Test Report: KVM_Linux_crio 19195

3c49d247522650dad7be9dd4f792820e054aa6e4:2024-07-08:35243

Failed tests (30/320)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 151.61
32 TestAddons/parallel/MetricsServer 348.05
45 TestAddons/StoppedEnableDisable 154.25
101 TestFunctional/parallel/MySQL 602.79
164 TestMultiControlPlane/serial/StopSecondaryNode 141.98
166 TestMultiControlPlane/serial/RestartSecondaryNode 48.77
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 402.35
171 TestMultiControlPlane/serial/StopCluster 141.78
231 TestMultiNode/serial/RestartKeepsNodes 311.24
233 TestMultiNode/serial/StopMultiNode 141.38
240 TestPreload 168.86
248 TestKubernetesUpgrade 375.92
260 TestStartStop/group/old-k8s-version/serial/FirstStart 295.39
285 TestStartStop/group/old-k8s-version/serial/DeployApp 0.59
287 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 111.79
297 TestStartStop/group/no-preload/serial/Stop 139.14
300 TestStartStop/group/old-k8s-version/serial/SecondStart 507.2
303 TestStartStop/group/embed-certs/serial/Stop 139.16
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.02
311 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 542.17
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.98
317 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.95
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.98
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 340.56
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 415.04
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 390.2
322 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.68
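
To triage any of these locally, a single failed test can usually be re-run in isolation from a minikube checkout. The sketch below is an assumption rather than the exact CI invocation: it presumes the standard Go integration-test runner under test/integration and a prebuilt out/minikube-linux-amd64 binary (the binary used throughout the logs below), and it mirrors this job's kvm2 + crio configuration; the runner's exact flags may differ.

    # Hypothetical local re-run of one failed test (flags mirror this job's
    # configuration; adjust the -run pattern for other tests in the table above):
    go test ./test/integration -v -timeout 60m \
        -run 'TestAddons/parallel/Ingress' \
        -minikube-start-args='--driver=kvm2 --container-runtime=crio'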
TestAddons/parallel/Ingress (151.61s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-268316 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-268316 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-268316 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5771cdad-38eb-4b69-9d82-5a58ef2c2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5771cdad-38eb-4b69-9d82-5a58ef2c2f4e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004345736s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-268316 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.813891221s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-268316 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.231
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-268316 addons disable ingress-dns --alsologtostderr -v=1: (1.640919337s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-268316 addons disable ingress --alsologtostderr -v=1: (7.975436621s)
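
The decisive failure above is the in-VM curl: "ssh: Process exited with status 28" means the curl run inside the node returned code 28, which is curl's operation-timeout exit code; the whole step took 2m8s before giving up. A minimal manual reproduction, built only from commands already shown in this log (the profile name addons-268316 is specific to this run), would be:

    # Confirm the ingress-nginx controller pod is Ready, then repeat the in-VM
    # request that timed out; -v is added here only to surface where it stalls.
    kubectl --context addons-268316 -n ingress-nginx get pods \
        -l app.kubernetes.io/component=controller
    out/minikube-linux-amd64 -p addons-268316 ssh \
        "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"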
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-268316 -n addons-268316
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-268316 logs -n 25: (1.352668277s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-972529 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC |                     |
	|         | -p download-only-972529                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:29 UTC |
	| delete  | -p download-only-972529                                                                     | download-only-972529 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:29 UTC |
	| delete  | -p download-only-548391                                                                     | download-only-548391 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:29 UTC |
	| delete  | -p download-only-972529                                                                     | download-only-972529 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-230858 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC |                     |
	|         | binary-mirror-230858                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39545                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-230858                                                                     | binary-mirror-230858 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC |                     |
	|         | addons-268316                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC |                     |
	|         | addons-268316                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-268316 --wait=true                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:31 UTC | 08 Jul 24 19:31 UTC |
	|         | -p addons-268316                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-268316 addons disable                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:31 UTC | 08 Jul 24 19:31 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-268316 ip                                                                            | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:31 UTC | 08 Jul 24 19:31 UTC |
	| addons  | addons-268316 addons disable                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:31 UTC | 08 Jul 24 19:31 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:31 UTC | 08 Jul 24 19:31 UTC |
	|         | addons-268316                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-268316 ssh curl -s                                                                   | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:31 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-268316 ssh cat                                                                       | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:32 UTC | 08 Jul 24 19:32 UTC |
	|         | /opt/local-path-provisioner/pvc-fe0dcfdc-b3e9-41ce-a1cc-00fdfd88c367_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-268316 addons disable                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:32 UTC | 08 Jul 24 19:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:32 UTC | 08 Jul 24 19:32 UTC |
	|         | -p addons-268316                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:32 UTC | 08 Jul 24 19:32 UTC |
	|         | addons-268316                                                                               |                      |         |         |                     |                     |
	| addons  | addons-268316 addons                                                                        | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:32 UTC | 08 Jul 24 19:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-268316 addons                                                                        | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:33 UTC | 08 Jul 24 19:33 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-268316 ip                                                                            | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:34 UTC | 08 Jul 24 19:34 UTC |
	| addons  | addons-268316 addons disable                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:34 UTC | 08 Jul 24 19:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-268316 addons disable                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:34 UTC | 08 Jul 24 19:34 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 19:29:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 19:29:12.804120   13764 out.go:291] Setting OutFile to fd 1 ...
	I0708 19:29:12.804225   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:29:12.804234   13764 out.go:304] Setting ErrFile to fd 2...
	I0708 19:29:12.804238   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:29:12.804419   13764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 19:29:12.805003   13764 out.go:298] Setting JSON to false
	I0708 19:29:12.805783   13764 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":702,"bootTime":1720466251,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 19:29:12.805840   13764 start.go:139] virtualization: kvm guest
	I0708 19:29:12.808052   13764 out.go:177] * [addons-268316] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 19:29:12.809555   13764 notify.go:220] Checking for updates...
	I0708 19:29:12.809604   13764 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 19:29:12.811054   13764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 19:29:12.812597   13764 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:29:12.813976   13764 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:29:12.815480   13764 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 19:29:12.817060   13764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 19:29:12.818707   13764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 19:29:12.850625   13764 out.go:177] * Using the kvm2 driver based on user configuration
	I0708 19:29:12.851864   13764 start.go:297] selected driver: kvm2
	I0708 19:29:12.851880   13764 start.go:901] validating driver "kvm2" against <nil>
	I0708 19:29:12.851891   13764 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 19:29:12.852594   13764 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 19:29:12.852671   13764 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 19:29:12.867676   13764 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 19:29:12.867735   13764 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 19:29:12.868003   13764 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 19:29:12.868082   13764 cni.go:84] Creating CNI manager for ""
	I0708 19:29:12.868099   13764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 19:29:12.868111   13764 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 19:29:12.868185   13764 start.go:340] cluster config:
	{Name:addons-268316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-268316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:29:12.868312   13764 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 19:29:12.871172   13764 out.go:177] * Starting "addons-268316" primary control-plane node in "addons-268316" cluster
	I0708 19:29:12.872622   13764 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 19:29:12.872659   13764 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 19:29:12.872666   13764 cache.go:56] Caching tarball of preloaded images
	I0708 19:29:12.872735   13764 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 19:29:12.872744   13764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 19:29:12.873042   13764 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/config.json ...
	I0708 19:29:12.873061   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/config.json: {Name:mk16b7cb24f23e9d6b1a688b3b1b6627cd8a91c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:12.873215   13764 start.go:360] acquireMachinesLock for addons-268316: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 19:29:12.873261   13764 start.go:364] duration metric: took 33.304µs to acquireMachinesLock for "addons-268316"
	I0708 19:29:12.873278   13764 start.go:93] Provisioning new machine with config: &{Name:addons-268316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-268316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:29:12.873330   13764 start.go:125] createHost starting for "" (driver="kvm2")
	I0708 19:29:12.874996   13764 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0708 19:29:12.875125   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:29:12.875168   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:29:12.889448   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0708 19:29:12.889900   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:29:12.890466   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:29:12.890482   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:29:12.890773   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:29:12.890969   13764 main.go:141] libmachine: (addons-268316) Calling .GetMachineName
	I0708 19:29:12.891097   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:12.891248   13764 start.go:159] libmachine.API.Create for "addons-268316" (driver="kvm2")
	I0708 19:29:12.891280   13764 client.go:168] LocalClient.Create starting
	I0708 19:29:12.891326   13764 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem
	I0708 19:29:13.345276   13764 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem
	I0708 19:29:13.434731   13764 main.go:141] libmachine: Running pre-create checks...
	I0708 19:29:13.434757   13764 main.go:141] libmachine: (addons-268316) Calling .PreCreateCheck
	I0708 19:29:13.435305   13764 main.go:141] libmachine: (addons-268316) Calling .GetConfigRaw
	I0708 19:29:13.435760   13764 main.go:141] libmachine: Creating machine...
	I0708 19:29:13.435777   13764 main.go:141] libmachine: (addons-268316) Calling .Create
	I0708 19:29:13.435962   13764 main.go:141] libmachine: (addons-268316) Creating KVM machine...
	I0708 19:29:13.437298   13764 main.go:141] libmachine: (addons-268316) DBG | found existing default KVM network
	I0708 19:29:13.438154   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:13.438024   13786 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0708 19:29:13.438207   13764 main.go:141] libmachine: (addons-268316) DBG | created network xml: 
	I0708 19:29:13.438228   13764 main.go:141] libmachine: (addons-268316) DBG | <network>
	I0708 19:29:13.438235   13764 main.go:141] libmachine: (addons-268316) DBG |   <name>mk-addons-268316</name>
	I0708 19:29:13.438242   13764 main.go:141] libmachine: (addons-268316) DBG |   <dns enable='no'/>
	I0708 19:29:13.438248   13764 main.go:141] libmachine: (addons-268316) DBG |   
	I0708 19:29:13.438256   13764 main.go:141] libmachine: (addons-268316) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0708 19:29:13.438264   13764 main.go:141] libmachine: (addons-268316) DBG |     <dhcp>
	I0708 19:29:13.438270   13764 main.go:141] libmachine: (addons-268316) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0708 19:29:13.438299   13764 main.go:141] libmachine: (addons-268316) DBG |     </dhcp>
	I0708 19:29:13.438313   13764 main.go:141] libmachine: (addons-268316) DBG |   </ip>
	I0708 19:29:13.438321   13764 main.go:141] libmachine: (addons-268316) DBG |   
	I0708 19:29:13.438335   13764 main.go:141] libmachine: (addons-268316) DBG | </network>
	I0708 19:29:13.438349   13764 main.go:141] libmachine: (addons-268316) DBG | 
	I0708 19:29:13.443833   13764 main.go:141] libmachine: (addons-268316) DBG | trying to create private KVM network mk-addons-268316 192.168.39.0/24...
	I0708 19:29:13.509625   13764 main.go:141] libmachine: (addons-268316) DBG | private KVM network mk-addons-268316 192.168.39.0/24 created
	I0708 19:29:13.509661   13764 main.go:141] libmachine: (addons-268316) Setting up store path in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316 ...
	I0708 19:29:13.509684   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:13.509601   13786 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:29:13.509705   13764 main.go:141] libmachine: (addons-268316) Building disk image from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso
	I0708 19:29:13.509785   13764 main.go:141] libmachine: (addons-268316) Downloading /home/jenkins/minikube-integration/19195-5988/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso...
	I0708 19:29:13.754837   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:13.754690   13786 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa...
	I0708 19:29:13.824387   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:13.824259   13786 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/addons-268316.rawdisk...
	I0708 19:29:13.824416   13764 main.go:141] libmachine: (addons-268316) DBG | Writing magic tar header
	I0708 19:29:13.824426   13764 main.go:141] libmachine: (addons-268316) DBG | Writing SSH key tar header
	I0708 19:29:13.824434   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:13.824379   13786 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316 ...
	I0708 19:29:13.824559   13764 main.go:141] libmachine: (addons-268316) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316 (perms=drwx------)
	I0708 19:29:13.824589   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316
	I0708 19:29:13.824601   13764 main.go:141] libmachine: (addons-268316) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines (perms=drwxr-xr-x)
	I0708 19:29:13.824611   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines
	I0708 19:29:13.824626   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:29:13.824636   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988
	I0708 19:29:13.824652   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0708 19:29:13.824667   13764 main.go:141] libmachine: (addons-268316) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube (perms=drwxr-xr-x)
	I0708 19:29:13.824676   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home/jenkins
	I0708 19:29:13.824691   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home
	I0708 19:29:13.824702   13764 main.go:141] libmachine: (addons-268316) DBG | Skipping /home - not owner
	I0708 19:29:13.824753   13764 main.go:141] libmachine: (addons-268316) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988 (perms=drwxrwxr-x)
	I0708 19:29:13.824800   13764 main.go:141] libmachine: (addons-268316) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0708 19:29:13.824816   13764 main.go:141] libmachine: (addons-268316) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0708 19:29:13.824830   13764 main.go:141] libmachine: (addons-268316) Creating domain...
	I0708 19:29:13.825723   13764 main.go:141] libmachine: (addons-268316) define libvirt domain using xml: 
	I0708 19:29:13.825751   13764 main.go:141] libmachine: (addons-268316) <domain type='kvm'>
	I0708 19:29:13.825761   13764 main.go:141] libmachine: (addons-268316)   <name>addons-268316</name>
	I0708 19:29:13.825769   13764 main.go:141] libmachine: (addons-268316)   <memory unit='MiB'>4000</memory>
	I0708 19:29:13.825777   13764 main.go:141] libmachine: (addons-268316)   <vcpu>2</vcpu>
	I0708 19:29:13.825782   13764 main.go:141] libmachine: (addons-268316)   <features>
	I0708 19:29:13.825790   13764 main.go:141] libmachine: (addons-268316)     <acpi/>
	I0708 19:29:13.825799   13764 main.go:141] libmachine: (addons-268316)     <apic/>
	I0708 19:29:13.825807   13764 main.go:141] libmachine: (addons-268316)     <pae/>
	I0708 19:29:13.825814   13764 main.go:141] libmachine: (addons-268316)     
	I0708 19:29:13.825845   13764 main.go:141] libmachine: (addons-268316)   </features>
	I0708 19:29:13.825866   13764 main.go:141] libmachine: (addons-268316)   <cpu mode='host-passthrough'>
	I0708 19:29:13.825894   13764 main.go:141] libmachine: (addons-268316)   
	I0708 19:29:13.825925   13764 main.go:141] libmachine: (addons-268316)   </cpu>
	I0708 19:29:13.825935   13764 main.go:141] libmachine: (addons-268316)   <os>
	I0708 19:29:13.825943   13764 main.go:141] libmachine: (addons-268316)     <type>hvm</type>
	I0708 19:29:13.825949   13764 main.go:141] libmachine: (addons-268316)     <boot dev='cdrom'/>
	I0708 19:29:13.825959   13764 main.go:141] libmachine: (addons-268316)     <boot dev='hd'/>
	I0708 19:29:13.825968   13764 main.go:141] libmachine: (addons-268316)     <bootmenu enable='no'/>
	I0708 19:29:13.825978   13764 main.go:141] libmachine: (addons-268316)   </os>
	I0708 19:29:13.825986   13764 main.go:141] libmachine: (addons-268316)   <devices>
	I0708 19:29:13.826002   13764 main.go:141] libmachine: (addons-268316)     <disk type='file' device='cdrom'>
	I0708 19:29:13.826018   13764 main.go:141] libmachine: (addons-268316)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/boot2docker.iso'/>
	I0708 19:29:13.826031   13764 main.go:141] libmachine: (addons-268316)       <target dev='hdc' bus='scsi'/>
	I0708 19:29:13.826040   13764 main.go:141] libmachine: (addons-268316)       <readonly/>
	I0708 19:29:13.826045   13764 main.go:141] libmachine: (addons-268316)     </disk>
	I0708 19:29:13.826052   13764 main.go:141] libmachine: (addons-268316)     <disk type='file' device='disk'>
	I0708 19:29:13.826063   13764 main.go:141] libmachine: (addons-268316)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0708 19:29:13.826082   13764 main.go:141] libmachine: (addons-268316)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/addons-268316.rawdisk'/>
	I0708 19:29:13.826094   13764 main.go:141] libmachine: (addons-268316)       <target dev='hda' bus='virtio'/>
	I0708 19:29:13.826114   13764 main.go:141] libmachine: (addons-268316)     </disk>
	I0708 19:29:13.826125   13764 main.go:141] libmachine: (addons-268316)     <interface type='network'>
	I0708 19:29:13.826138   13764 main.go:141] libmachine: (addons-268316)       <source network='mk-addons-268316'/>
	I0708 19:29:13.826149   13764 main.go:141] libmachine: (addons-268316)       <model type='virtio'/>
	I0708 19:29:13.826161   13764 main.go:141] libmachine: (addons-268316)     </interface>
	I0708 19:29:13.826171   13764 main.go:141] libmachine: (addons-268316)     <interface type='network'>
	I0708 19:29:13.826183   13764 main.go:141] libmachine: (addons-268316)       <source network='default'/>
	I0708 19:29:13.826194   13764 main.go:141] libmachine: (addons-268316)       <model type='virtio'/>
	I0708 19:29:13.826206   13764 main.go:141] libmachine: (addons-268316)     </interface>
	I0708 19:29:13.826216   13764 main.go:141] libmachine: (addons-268316)     <serial type='pty'>
	I0708 19:29:13.826232   13764 main.go:141] libmachine: (addons-268316)       <target port='0'/>
	I0708 19:29:13.826245   13764 main.go:141] libmachine: (addons-268316)     </serial>
	I0708 19:29:13.826253   13764 main.go:141] libmachine: (addons-268316)     <console type='pty'>
	I0708 19:29:13.826264   13764 main.go:141] libmachine: (addons-268316)       <target type='serial' port='0'/>
	I0708 19:29:13.826272   13764 main.go:141] libmachine: (addons-268316)     </console>
	I0708 19:29:13.826276   13764 main.go:141] libmachine: (addons-268316)     <rng model='virtio'>
	I0708 19:29:13.826285   13764 main.go:141] libmachine: (addons-268316)       <backend model='random'>/dev/random</backend>
	I0708 19:29:13.826292   13764 main.go:141] libmachine: (addons-268316)     </rng>
	I0708 19:29:13.826297   13764 main.go:141] libmachine: (addons-268316)     
	I0708 19:29:13.826308   13764 main.go:141] libmachine: (addons-268316)     
	I0708 19:29:13.826315   13764 main.go:141] libmachine: (addons-268316)   </devices>
	I0708 19:29:13.826320   13764 main.go:141] libmachine: (addons-268316) </domain>
	I0708 19:29:13.826343   13764 main.go:141] libmachine: (addons-268316) 
	I0708 19:29:13.831896   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:5f:ef:35 in network default
	I0708 19:29:13.832463   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:13.832503   13764 main.go:141] libmachine: (addons-268316) Ensuring networks are active...
	I0708 19:29:13.833151   13764 main.go:141] libmachine: (addons-268316) Ensuring network default is active
	I0708 19:29:13.833457   13764 main.go:141] libmachine: (addons-268316) Ensuring network mk-addons-268316 is active
	I0708 19:29:13.834053   13764 main.go:141] libmachine: (addons-268316) Getting domain xml...
	I0708 19:29:13.834844   13764 main.go:141] libmachine: (addons-268316) Creating domain...
	I0708 19:29:15.232307   13764 main.go:141] libmachine: (addons-268316) Waiting to get IP...
	I0708 19:29:15.233240   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:15.233767   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:15.233791   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:15.233740   13786 retry.go:31] will retry after 306.13701ms: waiting for machine to come up
	I0708 19:29:15.541108   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:15.541535   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:15.541554   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:15.541494   13786 retry.go:31] will retry after 297.323999ms: waiting for machine to come up
	I0708 19:29:15.839831   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:15.840232   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:15.840259   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:15.840178   13786 retry.go:31] will retry after 456.898587ms: waiting for machine to come up
	I0708 19:29:16.298829   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:16.299238   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:16.299261   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:16.299185   13786 retry.go:31] will retry after 415.573876ms: waiting for machine to come up
	I0708 19:29:16.716754   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:16.717134   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:16.717173   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:16.717076   13786 retry.go:31] will retry after 520.428467ms: waiting for machine to come up
	I0708 19:29:17.239014   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:17.239555   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:17.239588   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:17.239518   13786 retry.go:31] will retry after 669.632948ms: waiting for machine to come up
	I0708 19:29:17.911160   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:17.911608   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:17.911631   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:17.911568   13786 retry.go:31] will retry after 1.141733478s: waiting for machine to come up
	I0708 19:29:19.054876   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:19.055391   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:19.055412   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:19.055352   13786 retry.go:31] will retry after 974.557592ms: waiting for machine to come up
	I0708 19:29:20.031693   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:20.032130   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:20.032174   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:20.032108   13786 retry.go:31] will retry after 1.303729308s: waiting for machine to come up
	I0708 19:29:21.337418   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:21.337813   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:21.337833   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:21.337779   13786 retry.go:31] will retry after 2.103034523s: waiting for machine to come up
	I0708 19:29:23.441869   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:23.442401   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:23.442428   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:23.442341   13786 retry.go:31] will retry after 2.055610278s: waiting for machine to come up
	I0708 19:29:25.500460   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:25.500781   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:25.500804   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:25.500741   13786 retry.go:31] will retry after 2.588112058s: waiting for machine to come up
	I0708 19:29:28.089986   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:28.090395   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:28.090413   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:28.090353   13786 retry.go:31] will retry after 2.767394929s: waiting for machine to come up
	I0708 19:29:30.861280   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:30.861656   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:30.861684   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:30.861604   13786 retry.go:31] will retry after 3.925819648s: waiting for machine to come up
	I0708 19:29:34.789404   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:34.789865   13764 main.go:141] libmachine: (addons-268316) Found IP for machine: 192.168.39.231
	I0708 19:29:34.789888   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has current primary IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:34.789894   13764 main.go:141] libmachine: (addons-268316) Reserving static IP address...
	I0708 19:29:34.790335   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find host DHCP lease matching {name: "addons-268316", mac: "52:54:00:43:46:2e", ip: "192.168.39.231"} in network mk-addons-268316
	I0708 19:29:34.861738   13764 main.go:141] libmachine: (addons-268316) Reserved static IP address: 192.168.39.231
	I0708 19:29:34.861777   13764 main.go:141] libmachine: (addons-268316) DBG | Getting to WaitForSSH function...
	I0708 19:29:34.861786   13764 main.go:141] libmachine: (addons-268316) Waiting for SSH to be available...
	I0708 19:29:34.864294   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:34.864943   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:34.864967   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:34.865185   13764 main.go:141] libmachine: (addons-268316) DBG | Using SSH client type: external
	I0708 19:29:34.865209   13764 main.go:141] libmachine: (addons-268316) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa (-rw-------)
	I0708 19:29:34.865244   13764 main.go:141] libmachine: (addons-268316) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 19:29:34.865264   13764 main.go:141] libmachine: (addons-268316) DBG | About to run SSH command:
	I0708 19:29:34.865293   13764 main.go:141] libmachine: (addons-268316) DBG | exit 0
	I0708 19:29:35.000138   13764 main.go:141] libmachine: (addons-268316) DBG | SSH cmd err, output: <nil>: 
	I0708 19:29:35.000434   13764 main.go:141] libmachine: (addons-268316) KVM machine creation complete!
	I0708 19:29:35.000759   13764 main.go:141] libmachine: (addons-268316) Calling .GetConfigRaw
	I0708 19:29:35.001272   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:35.001471   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:35.001621   13764 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0708 19:29:35.001635   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:29:35.002809   13764 main.go:141] libmachine: Detecting operating system of created instance...
	I0708 19:29:35.002825   13764 main.go:141] libmachine: Waiting for SSH to be available...
	I0708 19:29:35.002837   13764 main.go:141] libmachine: Getting to WaitForSSH function...
	I0708 19:29:35.002843   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:35.005239   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.005513   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.005538   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.005658   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:35.005825   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.005984   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.006149   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:35.006304   13764 main.go:141] libmachine: Using SSH client type: native
	I0708 19:29:35.006479   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0708 19:29:35.006490   13764 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0708 19:29:35.123016   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 19:29:35.123043   13764 main.go:141] libmachine: Detecting the provisioner...
	I0708 19:29:35.123054   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:35.127185   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.127572   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.127605   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.127736   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:35.127957   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.128148   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.128296   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:35.128450   13764 main.go:141] libmachine: Using SSH client type: native
	I0708 19:29:35.128652   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0708 19:29:35.128671   13764 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0708 19:29:35.244615   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0708 19:29:35.244702   13764 main.go:141] libmachine: found compatible host: buildroot
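The provisioner detection above simply reads /etc/os-release over SSH and matches the NAME field against the known provisioners. A minimal way to inspect the same fields on the guest, assuming a standard os-release file, is:

	. /etc/os-release
	echo "NAME=${NAME} VERSION_ID=${VERSION_ID}"   # for this image: NAME=Buildroot VERSION_ID=2023.02.9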
	I0708 19:29:35.244714   13764 main.go:141] libmachine: Provisioning with buildroot...
	I0708 19:29:35.244724   13764 main.go:141] libmachine: (addons-268316) Calling .GetMachineName
	I0708 19:29:35.245014   13764 buildroot.go:166] provisioning hostname "addons-268316"
	I0708 19:29:35.245039   13764 main.go:141] libmachine: (addons-268316) Calling .GetMachineName
	I0708 19:29:35.245229   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:35.248071   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.248519   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.248544   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.248744   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:35.248969   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.249166   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.249315   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:35.249455   13764 main.go:141] libmachine: Using SSH client type: native
	I0708 19:29:35.249623   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0708 19:29:35.249643   13764 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-268316 && echo "addons-268316" | sudo tee /etc/hostname
	I0708 19:29:35.379013   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-268316
	
	I0708 19:29:35.379051   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:35.382288   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.382657   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.382692   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.382919   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:35.383115   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.383268   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.383415   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:35.383597   13764 main.go:141] libmachine: Using SSH client type: native
	I0708 19:29:35.383760   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0708 19:29:35.383776   13764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-268316' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-268316/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-268316' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 19:29:35.509773   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
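A quick, purely illustrative check that the hostname step above took effect on the guest (the 127.0.1.1 entry only appears if the fallback branch of the script ran):

	hostname                            # expected: addons-268316
	grep -n 'addons-268316' /etc/hosts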
	I0708 19:29:35.509798   13764 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 19:29:35.509818   13764 buildroot.go:174] setting up certificates
	I0708 19:29:35.509838   13764 provision.go:84] configureAuth start
	I0708 19:29:35.509847   13764 main.go:141] libmachine: (addons-268316) Calling .GetMachineName
	I0708 19:29:35.510133   13764 main.go:141] libmachine: (addons-268316) Calling .GetIP
	I0708 19:29:35.512876   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.513246   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.513277   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.513402   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:35.515506   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.515875   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.515911   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.516070   13764 provision.go:143] copyHostCerts
	I0708 19:29:35.516131   13764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 19:29:35.516277   13764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 19:29:35.516337   13764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 19:29:35.516403   13764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.addons-268316 san=[127.0.0.1 192.168.39.231 addons-268316 localhost minikube]
	I0708 19:29:35.849960   13764 provision.go:177] copyRemoteCerts
	I0708 19:29:35.850015   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 19:29:35.850034   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:35.852585   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.852861   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.852887   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.853046   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:35.853231   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.853375   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:35.853478   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:29:35.942985   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 19:29:35.970444   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 19:29:35.995509   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0708 19:29:36.021443   13764 provision.go:87] duration metric: took 511.590281ms to configureAuth
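configureAuth above generates a server certificate with the SANs listed in the log (127.0.0.1, 192.168.39.231, addons-268316, localhost, minikube) and copies it to /etc/docker on the guest. A hedged way to confirm those SANs made it into the deployed cert:

	sudo openssl x509 -noout -subject -in /etc/docker/server.pem
	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'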
	I0708 19:29:36.021480   13764 buildroot.go:189] setting minikube options for container-runtime
	I0708 19:29:36.021696   13764 config.go:182] Loaded profile config "addons-268316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:29:36.021786   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:36.024731   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.025122   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.025159   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.025303   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:36.025546   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.025771   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.025933   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:36.026161   13764 main.go:141] libmachine: Using SSH client type: native
	I0708 19:29:36.026370   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0708 19:29:36.026393   13764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 19:29:36.450335   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
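The %!s(MISSING) in the logged command is Go's fmt escaping of a literal %s; reconstructed from the output above, the command that actually ran is most likely:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio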
	
	I0708 19:29:36.450358   13764 main.go:141] libmachine: Checking connection to Docker...
	I0708 19:29:36.450366   13764 main.go:141] libmachine: (addons-268316) Calling .GetURL
	I0708 19:29:36.451405   13764 main.go:141] libmachine: (addons-268316) DBG | Using libvirt version 6000000
	I0708 19:29:36.453720   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.454074   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.454095   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.454289   13764 main.go:141] libmachine: Docker is up and running!
	I0708 19:29:36.454306   13764 main.go:141] libmachine: Reticulating splines...
	I0708 19:29:36.454312   13764 client.go:171] duration metric: took 23.563023008s to LocalClient.Create
	I0708 19:29:36.454333   13764 start.go:167] duration metric: took 23.563088586s to libmachine.API.Create "addons-268316"
	I0708 19:29:36.454349   13764 start.go:293] postStartSetup for "addons-268316" (driver="kvm2")
	I0708 19:29:36.454360   13764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 19:29:36.454375   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:36.454577   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 19:29:36.454600   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:36.456743   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.457104   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.457131   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.457289   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:36.457458   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.457688   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:36.457865   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:29:36.550754   13764 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 19:29:36.555548   13764 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 19:29:36.555581   13764 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 19:29:36.555655   13764 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 19:29:36.555684   13764 start.go:296] duration metric: took 101.328003ms for postStartSetup
	I0708 19:29:36.555725   13764 main.go:141] libmachine: (addons-268316) Calling .GetConfigRaw
	I0708 19:29:36.604342   13764 main.go:141] libmachine: (addons-268316) Calling .GetIP
	I0708 19:29:36.607210   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.607552   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.607594   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.607833   13764 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/config.json ...
	I0708 19:29:36.608008   13764 start.go:128] duration metric: took 23.734668795s to createHost
	I0708 19:29:36.608028   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:36.610293   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.610672   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.610699   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.610832   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:36.611032   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.611225   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.611369   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:36.611529   13764 main.go:141] libmachine: Using SSH client type: native
	I0708 19:29:36.611724   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0708 19:29:36.611739   13764 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 19:29:36.729030   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720466976.702143206
	
	I0708 19:29:36.729055   13764 fix.go:216] guest clock: 1720466976.702143206
	I0708 19:29:36.729064   13764 fix.go:229] Guest: 2024-07-08 19:29:36.702143206 +0000 UTC Remote: 2024-07-08 19:29:36.608018885 +0000 UTC m=+23.838704072 (delta=94.124321ms)
	I0708 19:29:36.729110   13764 fix.go:200] guest clock delta is within tolerance: 94.124321ms
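The %!s(MISSING).%!N(MISSING) above is again Go's format-verb escaping; the guest-clock probe is presumably just date +%s.%N, compared against the host-side timestamp to produce the delta reported by fix.go. A small sketch of the same comparison (key path and IP taken from earlier in the log, illustrative only):

	guest=$(ssh -o StrictHostKeyChecking=no \
	        -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa \
	        docker@192.168.39.231 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "skew=%.6fs\n", d }'
	# the log's delta: 1720466976.702143206 - 1720466976.608018885 = 0.094124321s = 94.124321ms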
	I0708 19:29:36.729118   13764 start.go:83] releasing machines lock for "addons-268316", held for 23.855846693s
	I0708 19:29:36.729146   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:36.729424   13764 main.go:141] libmachine: (addons-268316) Calling .GetIP
	I0708 19:29:36.732044   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.732466   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.732492   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.732677   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:36.733152   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:36.733338   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:36.733424   13764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 19:29:36.733455   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:36.733556   13764 ssh_runner.go:195] Run: cat /version.json
	I0708 19:29:36.733585   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:36.736459   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.736765   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.736816   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.736837   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.737026   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:36.737108   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.737127   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.737242   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.737312   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:36.737458   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:36.737482   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.737650   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:36.737645   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:29:36.737804   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:29:36.816266   13764 ssh_runner.go:195] Run: systemctl --version
	I0708 19:29:36.846706   13764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 19:29:37.055702   13764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 19:29:37.061802   13764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 19:29:37.061882   13764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 19:29:37.079087   13764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 19:29:37.079109   13764 start.go:494] detecting cgroup driver to use...
	I0708 19:29:37.079181   13764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 19:29:37.097180   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 19:29:37.112232   13764 docker.go:217] disabling cri-docker service (if available) ...
	I0708 19:29:37.112289   13764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 19:29:37.126575   13764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 19:29:37.141094   13764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 19:29:37.261710   13764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 19:29:37.417238   13764 docker.go:233] disabling docker service ...
	I0708 19:29:37.417315   13764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 19:29:37.431461   13764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 19:29:37.443941   13764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 19:29:37.560663   13764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 19:29:37.678902   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 19:29:37.694003   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 19:29:37.713565   13764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 19:29:37.713638   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.724284   13764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 19:29:37.724367   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.734950   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.745884   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.756414   13764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 19:29:37.767047   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.777426   13764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.796222   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
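The series of sed edits above pins the pause image, switches CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, and opens unprivileged ports via default_sysctls. A quick way to eyeball the resulting keys in the drop-in (the expected values in the comments are inferred from the commands above, not dumped from the file):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",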
	I0708 19:29:37.806922   13764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 19:29:37.816639   13764 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 19:29:37.816698   13764 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 19:29:37.829347   13764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
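The fallback above (modprobe br_netfilter after the sysctl read failed, then enabling IPv4 forwarding) can be verified with standard checks; a brief sketch:

	lsmod | grep br_netfilter                     # module loaded by the modprobe above
	sysctl net.bridge.bridge-nf-call-iptables     # resolvable once br_netfilter is loaded
	cat /proc/sys/net/ipv4/ip_forward             # expected: 1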
	I0708 19:29:37.838920   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:29:37.945497   13764 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 19:29:38.090753   13764 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 19:29:38.090839   13764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 19:29:38.095412   13764 start.go:562] Will wait 60s for crictl version
	I0708 19:29:38.095501   13764 ssh_runner.go:195] Run: which crictl
	I0708 19:29:38.099181   13764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 19:29:38.138719   13764 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
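The crictl version output above relies on the runtime-endpoint written to /etc/crictl.yaml earlier; the same queries work with the endpoint passed explicitly, which is a handy check when debugging a run like this (illustrative only):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images | head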
	I0708 19:29:38.138826   13764 ssh_runner.go:195] Run: crio --version
	I0708 19:29:38.167181   13764 ssh_runner.go:195] Run: crio --version
	I0708 19:29:38.197411   13764 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 19:29:38.198831   13764 main.go:141] libmachine: (addons-268316) Calling .GetIP
	I0708 19:29:38.201380   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:38.201695   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:38.201720   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:38.201899   13764 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 19:29:38.205967   13764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:29:38.218881   13764 kubeadm.go:877] updating cluster {Name:addons-268316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-268316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 19:29:38.218982   13764 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 19:29:38.219023   13764 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 19:29:38.250951   13764 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 19:29:38.251013   13764 ssh_runner.go:195] Run: which lz4
	I0708 19:29:38.255199   13764 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0708 19:29:38.259632   13764 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 19:29:38.259661   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 19:29:39.590953   13764 crio.go:462] duration metric: took 1.335779016s to copy over tarball
	I0708 19:29:39.591045   13764 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 19:29:41.849602   13764 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258531874s)
	I0708 19:29:41.849625   13764 crio.go:469] duration metric: took 2.258635163s to extract the tarball
	I0708 19:29:41.849631   13764 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 19:29:41.886974   13764 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 19:29:41.927676   13764 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 19:29:41.927698   13764 cache_images.go:84] Images are preloaded, skipping loading
	I0708 19:29:41.927706   13764 kubeadm.go:928] updating node { 192.168.39.231 8443 v1.30.2 crio true true} ...
	I0708 19:29:41.927832   13764 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-268316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-268316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
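The kubelet unit override above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes, per the scp line below). Applying such a drop-in by hand follows the usual systemd pattern, sketched here:

	sudo systemctl daemon-reload
	sudo systemctl restart kubelet
	systemctl cat kubelet | grep -- --node-ip     # confirm the overriding ExecStart is active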
	I0708 19:29:41.927901   13764 ssh_runner.go:195] Run: crio config
	I0708 19:29:41.975249   13764 cni.go:84] Creating CNI manager for ""
	I0708 19:29:41.975268   13764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 19:29:41.975279   13764 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 19:29:41.975302   13764 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-268316 NodeName:addons-268316 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 19:29:41.975490   13764 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-268316"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
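The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new below and later copied to kubeadm.yaml before kubeadm init runs. For a local sanity check of a file like this, kubeadm's dry-run mode can be used (an illustrative step, not part of this test run):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run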
	
	I0708 19:29:41.975564   13764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 19:29:41.985569   13764 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 19:29:41.985634   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 19:29:41.995284   13764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0708 19:29:42.011893   13764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 19:29:42.028663   13764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0708 19:29:42.045293   13764 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I0708 19:29:42.049356   13764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:29:42.061843   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:29:42.196088   13764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:29:42.214687   13764 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316 for IP: 192.168.39.231
	I0708 19:29:42.214714   13764 certs.go:194] generating shared ca certs ...
	I0708 19:29:42.214736   13764 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.214897   13764 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 19:29:42.339367   13764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt ...
	I0708 19:29:42.339393   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt: {Name:mka05d1dc67457a4777c0b3766c00234c397468e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.339582   13764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key ...
	I0708 19:29:42.339600   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key: {Name:mk76fee786db566d7f6df1d0853aed58c25bc81b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.339702   13764 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 19:29:42.458532   13764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt ...
	I0708 19:29:42.458559   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt: {Name:mkc8726977bf64262519c5d749001a3b31213a71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.458739   13764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key ...
	I0708 19:29:42.458759   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key: {Name:mk4b7a4888e6f070dec0196575192264fc2860e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.458852   13764 certs.go:256] generating profile certs ...
	I0708 19:29:42.458917   13764 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.key
	I0708 19:29:42.458947   13764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt with IP's: []
	I0708 19:29:42.648669   13764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt ...
	I0708 19:29:42.648699   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: {Name:mk77ac657d40a5d25957426be28dc19433a1fb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.648883   13764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.key ...
	I0708 19:29:42.648897   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.key: {Name:mkcad793ce6a2810562e5b9e54a4148a2a5b1c07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.648996   13764 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.key.e1d3a00c
	I0708 19:29:42.649019   13764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.crt.e1d3a00c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231]
	I0708 19:29:42.793284   13764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.crt.e1d3a00c ...
	I0708 19:29:42.793313   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.crt.e1d3a00c: {Name:mk1de88b64afb0e9940cf1ca3c7888adeb37451a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.793483   13764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.key.e1d3a00c ...
	I0708 19:29:42.793500   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.key.e1d3a00c: {Name:mkd8c322a4a8cbf764b246990122ec9ebfd75ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.793592   13764 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.crt.e1d3a00c -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.crt
	I0708 19:29:42.793667   13764 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.key.e1d3a00c -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.key
	I0708 19:29:42.793710   13764 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.key
	I0708 19:29:42.793727   13764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.crt with IP's: []
	I0708 19:29:43.203977   13764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.crt ...
	I0708 19:29:43.204009   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.crt: {Name:mkce7f3d2364a421e69951326bec58c6360dcaf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:43.204172   13764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.key ...
	I0708 19:29:43.204182   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.key: {Name:mk403da6925ba5cfcfe7c85e5000cc2b8ff2127d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:43.204333   13764 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 19:29:43.204364   13764 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 19:29:43.204388   13764 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 19:29:43.204410   13764 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 19:29:43.204961   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 19:29:43.237802   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 19:29:43.265345   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 19:29:43.291776   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 19:29:43.317926   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0708 19:29:43.342970   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 19:29:43.368280   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 19:29:43.394357   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 19:29:43.421397   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 19:29:43.449031   13764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 19:29:43.467140   13764 ssh_runner.go:195] Run: openssl version
	I0708 19:29:43.473260   13764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 19:29:43.484318   13764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:29:43.489247   13764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:29:43.489308   13764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:29:43.495332   13764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
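The two steps above install the minikube CA into the system trust store the c_rehash way: the certificate's subject hash names a symlink in /etc/ssl/certs. A short sketch of the same idea plus a verification:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941, as in the log
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # expected: ... OK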
	I0708 19:29:43.506102   13764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 19:29:43.510657   13764 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 19:29:43.510702   13764 kubeadm.go:391] StartCluster: {Name:addons-268316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-268316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:29:43.510783   13764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 19:29:43.510840   13764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 19:29:43.551609   13764 cri.go:89] found id: ""
	I0708 19:29:43.551683   13764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 19:29:43.561883   13764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 19:29:43.571360   13764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 19:29:43.580944   13764 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 19:29:43.580961   13764 kubeadm.go:156] found existing configuration files:
	
	I0708 19:29:43.581002   13764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 19:29:43.589770   13764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 19:29:43.589841   13764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 19:29:43.599390   13764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 19:29:43.608385   13764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 19:29:43.608447   13764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 19:29:43.617730   13764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 19:29:43.626794   13764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 19:29:43.626882   13764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 19:29:43.636379   13764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 19:29:43.645756   13764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 19:29:43.645822   13764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 19:29:43.655441   13764 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 19:29:43.718531   13764 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 19:29:43.718604   13764 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 19:29:43.869615   13764 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 19:29:43.869763   13764 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 19:29:43.869937   13764 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 19:29:44.091062   13764 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 19:29:44.185423   13764 out.go:204]   - Generating certificates and keys ...
	I0708 19:29:44.185565   13764 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 19:29:44.185687   13764 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 19:29:44.201618   13764 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0708 19:29:44.408651   13764 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0708 19:29:44.478821   13764 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0708 19:29:44.672509   13764 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0708 19:29:44.746144   13764 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0708 19:29:44.746515   13764 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-268316 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0708 19:29:44.817129   13764 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0708 19:29:44.817499   13764 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-268316 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0708 19:29:45.096918   13764 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0708 19:29:45.321890   13764 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0708 19:29:45.527132   13764 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0708 19:29:45.527248   13764 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 19:29:45.625388   13764 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 19:29:45.730386   13764 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 19:29:45.868662   13764 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 19:29:46.198138   13764 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 19:29:46.501663   13764 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 19:29:46.502352   13764 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 19:29:46.506763   13764 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 19:29:46.508659   13764 out.go:204]   - Booting up control plane ...
	I0708 19:29:46.508773   13764 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 19:29:46.508873   13764 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 19:29:46.508958   13764 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 19:29:46.524248   13764 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 19:29:46.524753   13764 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 19:29:46.524804   13764 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 19:29:46.665379   13764 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 19:29:46.665491   13764 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 19:29:47.666712   13764 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002053293s
	I0708 19:29:47.666831   13764 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 19:29:52.167692   13764 kubeadm.go:309] [api-check] The API server is healthy after 4.501941351s
	I0708 19:29:52.181266   13764 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 19:29:52.210031   13764 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 19:29:52.234842   13764 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 19:29:52.235049   13764 kubeadm.go:309] [mark-control-plane] Marking the node addons-268316 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 19:29:52.252683   13764 kubeadm.go:309] [bootstrap-token] Using token: j9x0og.fuvsuxwqklap1dd2
	I0708 19:29:52.254033   13764 out.go:204]   - Configuring RBAC rules ...
	I0708 19:29:52.254141   13764 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 19:29:52.259514   13764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 19:29:52.266997   13764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 19:29:52.273776   13764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 19:29:52.277201   13764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 19:29:52.280617   13764 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 19:29:52.571772   13764 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 19:29:53.015880   13764 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 19:29:53.571643   13764 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 19:29:53.572647   13764 kubeadm.go:309] 
	I0708 19:29:53.572717   13764 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 19:29:53.572729   13764 kubeadm.go:309] 
	I0708 19:29:53.572796   13764 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 19:29:53.572807   13764 kubeadm.go:309] 
	I0708 19:29:53.572855   13764 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 19:29:53.572922   13764 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 19:29:53.573002   13764 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 19:29:53.573012   13764 kubeadm.go:309] 
	I0708 19:29:53.573078   13764 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 19:29:53.573088   13764 kubeadm.go:309] 
	I0708 19:29:53.573152   13764 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 19:29:53.573162   13764 kubeadm.go:309] 
	I0708 19:29:53.573263   13764 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 19:29:53.573368   13764 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 19:29:53.573454   13764 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 19:29:53.573471   13764 kubeadm.go:309] 
	I0708 19:29:53.573602   13764 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 19:29:53.573729   13764 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 19:29:53.573742   13764 kubeadm.go:309] 
	I0708 19:29:53.573845   13764 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token j9x0og.fuvsuxwqklap1dd2 \
	I0708 19:29:53.574085   13764 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 19:29:53.574123   13764 kubeadm.go:309] 	--control-plane 
	I0708 19:29:53.574128   13764 kubeadm.go:309] 
	I0708 19:29:53.574255   13764 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 19:29:53.574267   13764 kubeadm.go:309] 
	I0708 19:29:53.574365   13764 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token j9x0og.fuvsuxwqklap1dd2 \
	I0708 19:29:53.574503   13764 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 19:29:53.574710   13764 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 19:29:53.574885   13764 cni.go:84] Creating CNI manager for ""
	I0708 19:29:53.574903   13764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 19:29:53.576607   13764 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 19:29:53.577805   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 19:29:53.588597   13764 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 19:29:53.609331   13764 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 19:29:53.609417   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:53.609450   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-268316 minikube.k8s.io/updated_at=2024_07_08T19_29_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=addons-268316 minikube.k8s.io/primary=true
	I0708 19:29:53.649803   13764 ops.go:34] apiserver oom_adj: -16
	I0708 19:29:53.750857   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:54.251017   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:54.750978   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:55.251894   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:55.751043   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:56.250936   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:56.750970   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:57.250919   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:57.751708   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:58.251803   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:58.751801   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:59.251031   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:59.751248   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:00.251466   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:00.751647   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:01.251947   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:01.751244   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:02.251058   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:02.750974   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:03.251612   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:03.751849   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:04.250978   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:04.751172   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:05.251249   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:05.750936   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:06.251927   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:06.335293   13764 kubeadm.go:1107] duration metric: took 12.725930219s to wait for elevateKubeSystemPrivileges
	W0708 19:30:06.335336   13764 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 19:30:06.335346   13764 kubeadm.go:393] duration metric: took 22.824647888s to StartCluster
	I0708 19:30:06.335367   13764 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:30:06.335534   13764 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:30:06.335874   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:30:06.336081   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 19:30:06.336099   13764 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0708 19:30:06.336081   13764 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:30:06.336209   13764 addons.go:69] Setting default-storageclass=true in profile "addons-268316"
	I0708 19:30:06.336237   13764 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-268316"
	I0708 19:30:06.336254   13764 addons.go:69] Setting metrics-server=true in profile "addons-268316"
	I0708 19:30:06.336297   13764 addons.go:234] Setting addon metrics-server=true in "addons-268316"
	I0708 19:30:06.336306   13764 addons.go:69] Setting helm-tiller=true in profile "addons-268316"
	I0708 19:30:06.336309   13764 addons.go:69] Setting ingress-dns=true in profile "addons-268316"
	I0708 19:30:06.336331   13764 addons.go:234] Setting addon ingress-dns=true in "addons-268316"
	I0708 19:30:06.336335   13764 addons.go:234] Setting addon helm-tiller=true in "addons-268316"
	I0708 19:30:06.336345   13764 addons.go:69] Setting storage-provisioner=true in profile "addons-268316"
	I0708 19:30:06.336358   13764 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-268316"
	I0708 19:30:06.336367   13764 addons.go:234] Setting addon storage-provisioner=true in "addons-268316"
	I0708 19:30:06.336369   13764 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-268316"
	I0708 19:30:06.336372   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336376   13764 addons.go:69] Setting volumesnapshots=true in profile "addons-268316"
	I0708 19:30:06.336384   13764 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-268316"
	I0708 19:30:06.336349   13764 addons.go:69] Setting volcano=true in profile "addons-268316"
	I0708 19:30:06.336399   13764 addons.go:234] Setting addon volumesnapshots=true in "addons-268316"
	I0708 19:30:06.336399   13764 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-268316"
	I0708 19:30:06.336408   13764 addons.go:69] Setting registry=true in profile "addons-268316"
	I0708 19:30:06.336428   13764 addons.go:234] Setting addon registry=true in "addons-268316"
	I0708 19:30:06.336434   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336451   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336359   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336400   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336670   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.336704   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.336385   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336759   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.336796   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.336799   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.336286   13764 addons.go:69] Setting ingress=true in profile "addons-268316"
	I0708 19:30:06.336205   13764 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-268316"
	I0708 19:30:06.336816   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.336824   13764 addons.go:234] Setting addon ingress=true in "addons-268316"
	I0708 19:30:06.336829   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.336841   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.336849   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336335   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336932   13764 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-268316"
	I0708 19:30:06.336325   13764 addons.go:69] Setting gcp-auth=true in profile "addons-268316"
	I0708 19:30:06.336970   13764 mustload.go:65] Loading cluster: addons-268316
	I0708 19:30:06.336994   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337003   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337023   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337031   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.336403   13764 addons.go:234] Setting addon volcano=true in "addons-268316"
	I0708 19:30:06.336200   13764 addons.go:69] Setting cloud-spanner=true in profile "addons-268316"
	I0708 19:30:06.336298   13764 config.go:182] Loaded profile config "addons-268316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:30:06.337081   13764 addons.go:234] Setting addon cloud-spanner=true in "addons-268316"
	I0708 19:30:06.337083   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.336194   13764 addons.go:69] Setting yakd=true in profile "addons-268316"
	I0708 19:30:06.337101   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337105   13764 addons.go:234] Setting addon yakd=true in "addons-268316"
	I0708 19:30:06.337216   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336351   13764 addons.go:69] Setting inspektor-gadget=true in profile "addons-268316"
	I0708 19:30:06.337266   13764 addons.go:234] Setting addon inspektor-gadget=true in "addons-268316"
	I0708 19:30:06.337294   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.337312   13764 config.go:182] Loaded profile config "addons-268316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:30:06.337346   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337370   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337409   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.337626   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.336799   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337669   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337684   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337713   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.337669   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337800   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337809   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337833   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337652   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337229   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337903   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337915   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.338033   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.338146   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.338175   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.339580   13764 out.go:177] * Verifying Kubernetes components...
	I0708 19:30:06.341606   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:30:06.357834   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45199
	I0708 19:30:06.357883   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42713
	I0708 19:30:06.357850   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34085
	I0708 19:30:06.358115   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39835
	I0708 19:30:06.358739   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0708 19:30:06.358845   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0708 19:30:06.358927   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.358976   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.359034   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.359057   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.359112   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.359169   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.359469   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.359486   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.359567   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.359578   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.359601   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.359612   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.359686   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.359697   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.359712   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.359722   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.359881   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.359919   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.359930   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.360062   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.360441   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.360469   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.360509   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.360531   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.363638   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.363712   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.363735   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.363751   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.363929   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.363951   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.364448   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.364490   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.364719   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.364749   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.366315   13764 addons.go:234] Setting addon default-storageclass=true in "addons-268316"
	I0708 19:30:06.366358   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.366695   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.366729   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.367023   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.367542   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.372476   13764 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-268316"
	I0708 19:30:06.372518   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.372866   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.372900   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.402888   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0708 19:30:06.403512   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.403599   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
	I0708 19:30:06.403953   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.404286   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.404308   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.404453   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.404464   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.404815   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.404924   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.405478   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.405519   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.405721   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.405798   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I0708 19:30:06.406638   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.408319   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.408739   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44639
	I0708 19:30:06.408999   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.409012   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.409067   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.409370   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.409966   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.409989   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.410199   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
	I0708 19:30:06.410329   13764 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.1
	I0708 19:30:06.410556   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.411058   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.411076   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.411396   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.411595   13764 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0708 19:30:06.411615   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0708 19:30:06.411634   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.411600   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.411689   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.411752   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.412002   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.412522   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.412563   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.413702   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I0708 19:30:06.413703   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.414105   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.414136   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.414793   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I0708 19:30:06.414946   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0708 19:30:06.415254   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.415543   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.415907   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.415923   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.416204   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.416219   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.416288   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.416568   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.417125   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.417162   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.417245   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.417410   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.418118   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I0708 19:30:06.418504   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.418872   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.418898   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.419041   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.419181   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.419207   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.419477   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.419535   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.419964   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.419985   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.420058   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.420108   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.420263   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.420417   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.420415   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.420455   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.420417   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.421017   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.421051   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.423035   13764 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0708 19:30:06.424742   13764 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0708 19:30:06.424762   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0708 19:30:06.424784   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.428563   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.428910   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.428933   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.429194   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.429417   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.429606   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.429766   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.431199   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0708 19:30:06.432259   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.433945   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46615
	I0708 19:30:06.434415   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.434524   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38963
	I0708 19:30:06.434882   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.435048   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.435060   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.435517   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.435707   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.436715   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.436733   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.437291   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.437345   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.438316   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.438354   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.438845   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.438878   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.439414   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.439577   13764 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0708 19:30:06.439662   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.441170   13764 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0708 19:30:06.441196   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0708 19:30:06.441217   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.441354   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.443137   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0708 19:30:06.443340   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0708 19:30:06.443827   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.444108   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.444626   13764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0708 19:30:06.444641   13764 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0708 19:30:06.444640   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.444670   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.444682   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.444687   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.444866   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.444986   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.445078   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.445605   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.445623   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.446406   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.447272   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.447298   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.448739   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.449104   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.449126   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.449811   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.450112   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.450296   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.450434   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.453155   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45859
	I0708 19:30:06.453613   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.453707   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0708 19:30:06.453794   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39479
	I0708 19:30:06.453918   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46453
	I0708 19:30:06.454572   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.454593   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.454662   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.455007   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.455019   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.455063   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.455389   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.455479   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.455657   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.455991   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.456001   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.456053   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
	I0708 19:30:06.456195   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39409
	I0708 19:30:06.456309   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.456340   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.456556   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.456646   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.456850   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.456911   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.457106   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.457126   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.457434   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.458961   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.459028   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.459041   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.459056   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.459080   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0708 19:30:06.459185   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.459195   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.459824   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.459993   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.460518   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.460551   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.461166   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.461228   13764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0708 19:30:06.461303   13764 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0708 19:30:06.461463   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.461589   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.461968   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.461983   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.462635   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.462668   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.463347   13764 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 19:30:06.463368   13764 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 19:30:06.463388   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.463446   13764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0708 19:30:06.463547   13764 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 19:30:06.464705   13764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0708 19:30:06.464845   13764 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 19:30:06.464860   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 19:30:06.464876   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.466240   13764 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0708 19:30:06.466260   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0708 19:30:06.466275   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.467713   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.468484   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.468505   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.468725   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.468888   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.469102   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.469117   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I0708 19:30:06.469310   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.469720   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.470413   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.470429   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.470961   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.471236   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.471972   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.472523   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.472562   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.472978   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.473976   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.474018   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.474676   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.474700   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.474727   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.474746   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.474765   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.474843   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.474994   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.475142   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.475264   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.475471   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.475613   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.475726   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.476506   13764 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0708 19:30:06.477871   13764 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0708 19:30:06.477884   13764 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0708 19:30:06.477897   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.480523   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40729
	I0708 19:30:06.481008   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.481424   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.481571   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.481583   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.481985   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.482325   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.482333   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.482345   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.482984   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.483148   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.483280   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.483383   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I0708 19:30:06.483522   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46077
	I0708 19:30:06.483654   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.483978   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.484213   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.484384   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.484396   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.484451   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.484771   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.484799   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.484857   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.485697   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.485714   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.485952   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.486697   13764 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0708 19:30:06.487557   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.487797   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36651
	I0708 19:30:06.488024   13764 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0708 19:30:06.488042   13764 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0708 19:30:06.488060   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.488263   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.488336   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.489215   13764 out.go:177]   - Using image docker.io/busybox:stable
	I0708 19:30:06.489324   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.489664   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.490093   13764 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0708 19:30:06.490147   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.490715   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.491421   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.492400   13764 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0708 19:30:06.492420   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0708 19:30:06.492437   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.492504   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.493165   13764 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0708 19:30:06.494092   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0708 19:30:06.494196   13764 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0708 19:30:06.494214   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0708 19:30:06.494230   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.494955   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.494989   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.495025   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.495182   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.495403   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.495595   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.495743   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.495888   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.496043   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.496142   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.496422   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.496644   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.496699   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0708 19:30:06.496770   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.497660   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.498238   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.498272   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.498451   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.498595   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.498727   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.498819   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.499236   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0708 19:30:06.500525   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0708 19:30:06.501767   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0708 19:30:06.503162   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	W0708 19:30:06.504006   13764 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34756->192.168.39.231:22: read: connection reset by peer
	I0708 19:30:06.504035   13764 retry.go:31] will retry after 149.850572ms: ssh: handshake failed: read tcp 192.168.39.1:34756->192.168.39.231:22: read: connection reset by peer
	I0708 19:30:06.504680   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0708 19:30:06.505032   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.505480   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.505499   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.505870   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.506034   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0708 19:30:06.506195   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.507914   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.508283   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0708 19:30:06.508349   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0708 19:30:06.509365   13764 out.go:177]   - Using image docker.io/registry:2.8.3
	I0708 19:30:06.509438   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0708 19:30:06.509453   13764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0708 19:30:06.509472   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.512138   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38287
	I0708 19:30:06.512290   13764 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0708 19:30:06.512871   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.513322   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.513364   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.513472   13764 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0708 19:30:06.513490   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0708 19:30:06.513508   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.513563   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.513706   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.513832   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.513925   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.517049   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.517418   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.517448   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.517589   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.517785   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.517920   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.518058   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.536016   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.536028   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.536545   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.536555   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.536566   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.536573   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.536891   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.536984   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.537107   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.537181   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.538810   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.538913   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.539074   13764 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 19:30:06.539087   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:06.539091   13764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 19:30:06.539107   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:06.539110   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.539266   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:06.539277   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:06.539287   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:06.539294   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:06.539466   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:06.539481   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	W0708 19:30:06.539575   13764 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0708 19:30:06.541925   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.542377   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.542401   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.542580   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.542768   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.542941   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.543076   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	W0708 19:30:06.546037   13764 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34786->192.168.39.231:22: read: connection reset by peer
	I0708 19:30:06.546065   13764 retry.go:31] will retry after 329.605991ms: ssh: handshake failed: read tcp 192.168.39.1:34786->192.168.39.231:22: read: connection reset by peer
	W0708 19:30:06.654795   13764 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34796->192.168.39.231:22: read: connection reset by peer
	I0708 19:30:06.654833   13764 retry.go:31] will retry after 494.30651ms: ssh: handshake failed: read tcp 192.168.39.1:34796->192.168.39.231:22: read: connection reset by peer
	I0708 19:30:06.844238   13764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:30:06.844458   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0708 19:30:06.856509   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0708 19:30:06.894179   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0708 19:30:06.894204   13764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0708 19:30:06.932368   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0708 19:30:06.983456   13764 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0708 19:30:06.983481   13764 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0708 19:30:07.067767   13764 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0708 19:30:07.067791   13764 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0708 19:30:07.071690   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0708 19:30:07.076524   13764 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0708 19:30:07.076548   13764 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0708 19:30:07.091083   13764 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 19:30:07.091101   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0708 19:30:07.098130   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 19:30:07.102426   13764 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0708 19:30:07.102445   13764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0708 19:30:07.145261   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0708 19:30:07.153913   13764 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0708 19:30:07.153939   13764 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0708 19:30:07.171243   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0708 19:30:07.171268   13764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0708 19:30:07.186774   13764 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0708 19:30:07.186793   13764 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0708 19:30:07.246384   13764 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0708 19:30:07.246403   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0708 19:30:07.251512   13764 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 19:30:07.251530   13764 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 19:30:07.306689   13764 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0708 19:30:07.306708   13764 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0708 19:30:07.311245   13764 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0708 19:30:07.311262   13764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0708 19:30:07.312372   13764 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0708 19:30:07.312387   13764 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0708 19:30:07.417363   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0708 19:30:07.417390   13764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0708 19:30:07.450888   13764 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0708 19:30:07.450919   13764 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0708 19:30:07.509759   13764 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0708 19:30:07.509783   13764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0708 19:30:07.527049   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0708 19:30:07.550790   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0708 19:30:07.574174   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 19:30:07.587239   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0708 19:30:07.587268   13764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0708 19:30:07.627694   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0708 19:30:07.634258   13764 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 19:30:07.634277   13764 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 19:30:07.659103   13764 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0708 19:30:07.659143   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0708 19:30:07.671612   13764 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0708 19:30:07.671635   13764 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0708 19:30:07.681561   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0708 19:30:07.681588   13764 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0708 19:30:07.752596   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0708 19:30:07.752620   13764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0708 19:30:07.789562   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 19:30:07.816130   13764 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0708 19:30:07.816158   13764 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0708 19:30:07.852905   13764 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 19:30:07.852927   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0708 19:30:07.959574   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0708 19:30:08.016334   13764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0708 19:30:08.016358   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0708 19:30:08.116108   13764 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0708 19:30:08.116135   13764 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0708 19:30:08.222279   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 19:30:08.309838   13764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0708 19:30:08.309873   13764 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0708 19:30:08.466384   13764 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0708 19:30:08.466410   13764 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0708 19:30:08.574391   13764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0708 19:30:08.574422   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0708 19:30:08.860952   13764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0708 19:30:08.860974   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0708 19:30:08.877504   13764 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0708 19:30:08.877522   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0708 19:30:09.143815   13764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0708 19:30:09.143859   13764 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0708 19:30:09.171978   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0708 19:30:09.390006   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0708 19:30:09.419915   13764 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.575427836s)
	I0708 19:30:09.419951   13764 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0708 19:30:09.419955   13764 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.575676531s)
	I0708 19:30:09.420010   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.563444318s)
	I0708 19:30:09.420055   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:09.420081   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:09.420391   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:09.420407   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:09.420417   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:09.420425   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:09.420445   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:09.420895   13764 node_ready.go:35] waiting up to 6m0s for node "addons-268316" to be "Ready" ...
	I0708 19:30:09.421067   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:09.421069   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:09.421087   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:09.425119   13764 node_ready.go:49] node "addons-268316" has status "Ready":"True"
	I0708 19:30:09.425143   13764 node_ready.go:38] duration metric: took 4.234104ms for node "addons-268316" to be "Ready" ...
	I0708 19:30:09.425155   13764 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 19:30:09.442068   13764 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:09.985500   13764 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-268316" context rescaled to 1 replicas
	I0708 19:30:10.789661   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.85725085s)
	I0708 19:30:10.789716   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:10.789729   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:10.790043   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:10.790084   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:10.790108   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:10.790122   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:10.790391   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:10.790450   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:10.790464   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:11.702987   13764 pod_ready.go:102] pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace has status "Ready":"False"
	I0708 19:30:13.483801   13764 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0708 19:30:13.483837   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:13.486850   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:13.487287   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:13.487319   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:13.487522   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:13.487708   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:13.487849   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:13.488014   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:13.898299   13764 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0708 19:30:13.952410   13764 pod_ready.go:102] pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace has status "Ready":"False"
	I0708 19:30:14.159311   13764 addons.go:234] Setting addon gcp-auth=true in "addons-268316"
	I0708 19:30:14.159373   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:14.159830   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:14.159876   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:14.175204   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
	I0708 19:30:14.175677   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:14.176125   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:14.176147   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:14.176469   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:14.176930   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:14.176953   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:14.192542   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
	I0708 19:30:14.192914   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:14.193371   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:14.193391   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:14.193695   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:14.193912   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:14.195489   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:14.195710   13764 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0708 19:30:14.195739   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:14.198470   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:14.198871   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:14.198899   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:14.199009   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:14.199163   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:14.199289   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:14.199420   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:15.874006   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.802280101s)
	I0708 19:30:15.874061   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874073   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874101   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.775948161s)
	I0708 19:30:15.874126   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874137   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874147   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.728857792s)
	I0708 19:30:15.874190   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874203   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.347123523s)
	I0708 19:30:15.874231   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874243   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874257   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.323424151s)
	I0708 19:30:15.874276   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874208   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874286   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874365   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.246650808s)
	I0708 19:30:15.874382   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874390   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874413   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.874427   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.874438   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874446   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874558   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.300357136s)
	I0708 19:30:15.874609   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874631   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874482   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.084892969s)
	I0708 19:30:15.874610   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.874694   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874711   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.874738   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.874637   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.874664   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.874787   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.874795   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.874808   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.874817   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.874832   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874847   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874863   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.874873   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.652556056s)
	I0708 19:30:15.874880   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.874714   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874821   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874966   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874716   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.875005   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.875014   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874755   13764 addons.go:475] Verifying addon ingress=true in "addons-268316"
	I0708 19:30:15.875170   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.703037869s)
	I0708 19:30:15.875196   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.875204   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.875280   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.875305   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.875312   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.875321   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.875328   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.875330   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.875338   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.875340   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.876386   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.876424   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.876431   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.876439   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.876446   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.876518   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.876537   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.876543   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.876551   13764 addons.go:475] Verifying addon registry=true in "addons-268316"
	I0708 19:30:15.876899   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.876930   13764 out.go:177] * Verifying ingress addon...
	I0708 19:30:15.876962   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.876990   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.876998   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.874760   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.915151961s)
	I0708 19:30:15.877728   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.877739   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874774   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.877869   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.877879   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.878246   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.878274   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.878281   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.878288   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.878296   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.878325   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.878347   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.878365   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.878371   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.878378   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.878385   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.878467   13764 out.go:177] * Verifying registry addon...
	I0708 19:30:15.879407   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.879439   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.879460   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.879623   13764 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0708 19:30:15.879706   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.879732   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.879750   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874689   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.874909   13764 main.go:141] libmachine: Making call to close driver server
	W0708 19:30:15.874907   13764 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0708 19:30:15.879803   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.879808   13764 retry.go:31] will retry after 305.52208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
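	Note: the apply failure above (and the retry it triggers) is an ordering race, not a broken manifest: the addon applies the snapshot.storage.k8s.io CRDs and the csi-hostpath VolumeSnapshotClass in a single kubectl invocation, and the class is rejected because its CRD is not yet established. minikube retries, and the later `kubectl apply --force` in this log succeeds once the CRDs are registered. As a rough sketch only (file paths are taken from the log; the explicit wait step is an assumption, not what the addon actually runs), the race could be avoided by waiting for the CRD before creating the class:

		# sketch: register the CRD, wait until it is established, then create the class
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml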
	I0708 19:30:15.876940   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.879857   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.879908   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.880122   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.880136   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.880144   13764 addons.go:475] Verifying addon metrics-server=true in "addons-268316"
	I0708 19:30:15.880699   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.880713   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.880892   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.880929   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.880946   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.881749   13764 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-268316 service yakd-dashboard -n yakd-dashboard
	
	I0708 19:30:15.882519   13764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0708 19:30:15.897086   13764 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0708 19:30:15.897105   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:15.937341   13764 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0708 19:30:15.937377   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:15.938383   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.938408   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.938663   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.938680   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.938696   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	W0708 19:30:15.938783   13764 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
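	Note: the 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict: another writer updated the local-path StorageClass between minikube's read and its update, so the API server rejected the stale object. As a hedged illustration only (not the addon's own code path), marking local-path as default with a patch avoids resubmitting a stale resourceVersion:

		# sketch: a strategic-merge patch does not carry resourceVersion, so it sidesteps this conflict
		kubectl patch storageclass local-path -p \
		  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'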
	I0708 19:30:15.981122   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.981155   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.981499   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.981518   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.983659   13764 pod_ready.go:102] pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace has status "Ready":"False"
	I0708 19:30:16.185977   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 19:30:16.397755   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:16.397788   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:16.759030   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.368962914s)
	I0708 19:30:16.759108   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:16.759123   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:16.759050   13764 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.563318348s)
	I0708 19:30:16.759400   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:16.759432   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:16.759421   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:16.759461   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:16.759497   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:16.759848   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:16.759862   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:16.759873   13764 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-268316"
	I0708 19:30:16.759892   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:16.761571   13764 out.go:177] * Verifying csi-hostpath-driver addon...
	I0708 19:30:16.761590   13764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0708 19:30:16.763327   13764 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0708 19:30:16.764141   13764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0708 19:30:16.764855   13764 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0708 19:30:16.764876   13764 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0708 19:30:16.796977   13764 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0708 19:30:16.797006   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:16.883932   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:16.884626   13764 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0708 19:30:16.884643   13764 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0708 19:30:16.890340   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:17.064648   13764 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0708 19:30:17.064668   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0708 19:30:17.177275   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0708 19:30:17.290265   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:17.387867   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:17.389905   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:17.772328   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:17.884561   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:17.890751   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:18.271932   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:18.385175   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:18.387368   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:18.448889   13764 pod_ready.go:102] pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace has status "Ready":"False"
	I0708 19:30:18.621925   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.435897584s)
	I0708 19:30:18.621983   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:18.622006   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:18.622263   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:18.622305   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:18.622312   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:18.622326   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:18.622334   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:18.622538   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:18.622562   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:18.772197   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:18.894538   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:18.894609   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:19.145512   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.968197321s)
	I0708 19:30:19.145556   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:19.145567   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:19.145867   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:19.145911   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:19.145921   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:19.145927   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:19.145949   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:19.146184   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:19.146239   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:19.148004   13764 addons.go:475] Verifying addon gcp-auth=true in "addons-268316"
	I0708 19:30:19.149950   13764 out.go:177] * Verifying gcp-auth addon...
	I0708 19:30:19.152398   13764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0708 19:30:19.182156   13764 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0708 19:30:19.182183   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:19.274665   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:19.385436   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:19.391157   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:19.655876   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:19.771898   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:19.885381   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:19.888263   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:19.952566   13764 pod_ready.go:97] pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.231 HostIPs:[{IP:192.168.39.231}] PodIP: PodIPs:[] StartTime:2024-07-08 19:30:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-08 19:30:11 +0000 UTC,FinishedAt:2024-07-08 19:30:17 +0000 UTC,ContainerID:cri-o://0b33c0f3815deb48a10cce59e4433578640eb5f7f7f542bdfe746620d3c992ae,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://0b33c0f3815deb48a10cce59e4433578640eb5f7f7f542bdfe746620d3c992ae Started:0xc0025429a0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0708 19:30:19.952598   13764 pod_ready.go:81] duration metric: took 10.510498559s for pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace to be "Ready" ...
	E0708 19:30:19.952612   13764 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.231 HostIPs:[{IP:192.168.39.231}] PodIP: PodIPs:[] StartTime:2024-07-08 19:30:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-08 19:30:11 +0000 UTC,FinishedAt:2024-07-08 19:30:17 +0000 UTC,ContainerID:cri-o://0b33c0f3815deb48a10cce59e4433578640eb5f7f7f542bdfe746620d3c992ae,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://0b33c0f3815deb48a10cce59e4433578640eb5f7f7f542bdfe746620d3c992ae Started:0xc0025429a0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0708 19:30:19.952621   13764 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mdmnx" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.964207   13764 pod_ready.go:92] pod "coredns-7db6d8ff4d-mdmnx" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:19.964229   13764 pod_ready.go:81] duration metric: took 11.599292ms for pod "coredns-7db6d8ff4d-mdmnx" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.964243   13764 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.974307   13764 pod_ready.go:92] pod "etcd-addons-268316" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:19.974335   13764 pod_ready.go:81] duration metric: took 10.083616ms for pod "etcd-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.974350   13764 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.981161   13764 pod_ready.go:92] pod "kube-apiserver-addons-268316" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:19.981179   13764 pod_ready.go:81] duration metric: took 6.820418ms for pod "kube-apiserver-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.981190   13764 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.987268   13764 pod_ready.go:92] pod "kube-controller-manager-addons-268316" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:19.987285   13764 pod_ready.go:81] duration metric: took 6.087748ms for pod "kube-controller-manager-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.987296   13764 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7plgc" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:20.158147   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:20.270318   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:20.347381   13764 pod_ready.go:92] pod "kube-proxy-7plgc" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:20.347415   13764 pod_ready.go:81] duration metric: took 360.111234ms for pod "kube-proxy-7plgc" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:20.347430   13764 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:20.385071   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:20.392739   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:20.657344   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:20.745992   13764 pod_ready.go:92] pod "kube-scheduler-addons-268316" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:20.746024   13764 pod_ready.go:81] duration metric: took 398.58436ms for pod "kube-scheduler-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:20.746037   13764 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-s4n9d" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:20.772660   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:20.884153   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:20.886940   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:21.157466   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:21.269235   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:21.384480   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:21.395529   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:21.656534   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:21.769504   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:21.883565   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:21.886330   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:22.156134   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:22.270016   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:22.384218   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:22.387270   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:22.656642   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:22.753366   13764 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-s4n9d" in "kube-system" namespace has status "Ready":"False"
	I0708 19:30:22.769618   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:22.886014   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:22.887815   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:23.158198   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:23.269971   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:23.385720   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:23.387839   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:23.744086   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:23.770775   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:23.884436   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:23.888569   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:24.155773   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:24.270158   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:24.384960   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:24.388616   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:24.655910   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:24.769803   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:24.884135   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:24.888392   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:25.157144   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:25.253339   13764 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-s4n9d" in "kube-system" namespace has status "Ready":"False"
	I0708 19:30:25.273789   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:25.385779   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:25.387229   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:25.656585   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:25.762163   13764 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-s4n9d" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:25.762184   13764 pod_ready.go:81] duration metric: took 5.016139352s for pod "nvidia-device-plugin-daemonset-s4n9d" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:25.762191   13764 pod_ready.go:38] duration metric: took 16.337024941s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 19:30:25.762204   13764 api_server.go:52] waiting for apiserver process to appear ...
	I0708 19:30:25.762264   13764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 19:30:25.772495   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:25.781026   13764 api_server.go:72] duration metric: took 19.444844627s to wait for apiserver process to appear ...
	I0708 19:30:25.781051   13764 api_server.go:88] waiting for apiserver healthz status ...
	I0708 19:30:25.781071   13764 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I0708 19:30:25.785956   13764 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I0708 19:30:25.786820   13764 api_server.go:141] control plane version: v1.30.2
	I0708 19:30:25.786850   13764 api_server.go:131] duration metric: took 5.79073ms to wait for apiserver health ...
	I0708 19:30:25.786868   13764 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 19:30:25.796582   13764 system_pods.go:59] 18 kube-system pods found
	I0708 19:30:25.796605   13764 system_pods.go:61] "coredns-7db6d8ff4d-mdmnx" [e8790295-025f-492c-8527-b45580989758] Running
	I0708 19:30:25.796612   13764 system_pods.go:61] "csi-hostpath-attacher-0" [f1542e8d-b696-41e6-8d98-c47563e0d4f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0708 19:30:25.796618   13764 system_pods.go:61] "csi-hostpath-resizer-0" [dce4942c-24f6-4da5-b501-5b8577368aa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0708 19:30:25.796626   13764 system_pods.go:61] "csi-hostpathplugin-wsvcv" [26bd046a-4a16-4a94-aa7e-09f3b7b7c6c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0708 19:30:25.796631   13764 system_pods.go:61] "etcd-addons-268316" [88c9169d-a21e-4479-9e15-38a9161b26ef] Running
	I0708 19:30:25.796635   13764 system_pods.go:61] "kube-apiserver-addons-268316" [be0113de-6c81-41f3-bd33-98d5f4c07b95] Running
	I0708 19:30:25.796639   13764 system_pods.go:61] "kube-controller-manager-addons-268316" [bcc97d95-de10-4126-86cd-0e60ca3ce913] Running
	I0708 19:30:25.796644   13764 system_pods.go:61] "kube-ingress-dns-minikube" [f5f48486-6578-4b7c-ab34-56de96be0694] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0708 19:30:25.796651   13764 system_pods.go:61] "kube-proxy-7plgc" [4dcd9909-5fdf-4a54-a66c-12498b65c28f] Running
	I0708 19:30:25.796655   13764 system_pods.go:61] "kube-scheduler-addons-268316" [12fedcd0-6554-4acf-9293-619280507622] Running
	I0708 19:30:25.796660   13764 system_pods.go:61] "metrics-server-c59844bb4-c6gzl" [fa5607f8-de0f-4bb1-b219-54ef33238b21] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 19:30:25.796666   13764 system_pods.go:61] "nvidia-device-plugin-daemonset-s4n9d" [bd2137b3-9f97-4991-91e6-20ab23e68c75] Running
	I0708 19:30:25.796672   13764 system_pods.go:61] "registry-g8hs8" [36f4018c-5097-47ad-b3e0-a8a225032ab3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0708 19:30:25.796676   13764 system_pods.go:61] "registry-proxy-rrxb2" [ebfad772-c807-408a-81ef-0f5d1ad1b929] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0708 19:30:25.796691   13764 system_pods.go:61] "snapshot-controller-745499f584-s2fn5" [e3414fea-8eee-4787-b9c1-70ada7ae04cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0708 19:30:25.796701   13764 system_pods.go:61] "snapshot-controller-745499f584-skqf6" [7af3eb18-da85-4dce-bbed-84f62a78d232] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0708 19:30:25.796705   13764 system_pods.go:61] "storage-provisioner" [3a22fea0-2e74-4b1d-8943-4009c3bae190] Running
	I0708 19:30:25.796710   13764 system_pods.go:61] "tiller-deploy-6677d64bcd-lmtgw" [785aba76-863a-4bd2-a24f-c7eaa42f49b4] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0708 19:30:25.796716   13764 system_pods.go:74] duration metric: took 9.842333ms to wait for pod list to return data ...
	I0708 19:30:25.796726   13764 default_sa.go:34] waiting for default service account to be created ...
	I0708 19:30:25.798545   13764 default_sa.go:45] found service account: "default"
	I0708 19:30:25.798562   13764 default_sa.go:55] duration metric: took 1.83091ms for default service account to be created ...
	I0708 19:30:25.798569   13764 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 19:30:25.808104   13764 system_pods.go:86] 18 kube-system pods found
	I0708 19:30:25.808126   13764 system_pods.go:89] "coredns-7db6d8ff4d-mdmnx" [e8790295-025f-492c-8527-b45580989758] Running
	I0708 19:30:25.808133   13764 system_pods.go:89] "csi-hostpath-attacher-0" [f1542e8d-b696-41e6-8d98-c47563e0d4f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0708 19:30:25.808139   13764 system_pods.go:89] "csi-hostpath-resizer-0" [dce4942c-24f6-4da5-b501-5b8577368aa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0708 19:30:25.808146   13764 system_pods.go:89] "csi-hostpathplugin-wsvcv" [26bd046a-4a16-4a94-aa7e-09f3b7b7c6c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0708 19:30:25.808152   13764 system_pods.go:89] "etcd-addons-268316" [88c9169d-a21e-4479-9e15-38a9161b26ef] Running
	I0708 19:30:25.808157   13764 system_pods.go:89] "kube-apiserver-addons-268316" [be0113de-6c81-41f3-bd33-98d5f4c07b95] Running
	I0708 19:30:25.808164   13764 system_pods.go:89] "kube-controller-manager-addons-268316" [bcc97d95-de10-4126-86cd-0e60ca3ce913] Running
	I0708 19:30:25.808176   13764 system_pods.go:89] "kube-ingress-dns-minikube" [f5f48486-6578-4b7c-ab34-56de96be0694] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0708 19:30:25.808187   13764 system_pods.go:89] "kube-proxy-7plgc" [4dcd9909-5fdf-4a54-a66c-12498b65c28f] Running
	I0708 19:30:25.808194   13764 system_pods.go:89] "kube-scheduler-addons-268316" [12fedcd0-6554-4acf-9293-619280507622] Running
	I0708 19:30:25.808203   13764 system_pods.go:89] "metrics-server-c59844bb4-c6gzl" [fa5607f8-de0f-4bb1-b219-54ef33238b21] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 19:30:25.808210   13764 system_pods.go:89] "nvidia-device-plugin-daemonset-s4n9d" [bd2137b3-9f97-4991-91e6-20ab23e68c75] Running
	I0708 19:30:25.808216   13764 system_pods.go:89] "registry-g8hs8" [36f4018c-5097-47ad-b3e0-a8a225032ab3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0708 19:30:25.808223   13764 system_pods.go:89] "registry-proxy-rrxb2" [ebfad772-c807-408a-81ef-0f5d1ad1b929] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0708 19:30:25.808232   13764 system_pods.go:89] "snapshot-controller-745499f584-s2fn5" [e3414fea-8eee-4787-b9c1-70ada7ae04cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0708 19:30:25.808239   13764 system_pods.go:89] "snapshot-controller-745499f584-skqf6" [7af3eb18-da85-4dce-bbed-84f62a78d232] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0708 19:30:25.808247   13764 system_pods.go:89] "storage-provisioner" [3a22fea0-2e74-4b1d-8943-4009c3bae190] Running
	I0708 19:30:25.808257   13764 system_pods.go:89] "tiller-deploy-6677d64bcd-lmtgw" [785aba76-863a-4bd2-a24f-c7eaa42f49b4] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0708 19:30:25.808265   13764 system_pods.go:126] duration metric: took 9.691784ms to wait for k8s-apps to be running ...
	I0708 19:30:25.808273   13764 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 19:30:25.808312   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 19:30:25.823384   13764 system_svc.go:56] duration metric: took 15.100465ms WaitForService to wait for kubelet
	I0708 19:30:25.823420   13764 kubeadm.go:576] duration metric: took 19.487243061s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 19:30:25.823445   13764 node_conditions.go:102] verifying NodePressure condition ...
	I0708 19:30:25.885912   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:25.890599   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:25.947614   13764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 19:30:25.947639   13764 node_conditions.go:123] node cpu capacity is 2
	I0708 19:30:25.947650   13764 node_conditions.go:105] duration metric: took 124.188481ms to run NodePressure ...
	I0708 19:30:25.947661   13764 start.go:240] waiting for startup goroutines ...
	I0708 19:30:26.156149   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:26.275957   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:26.384290   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:26.389704   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:26.788097   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:26.788457   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:26.884188   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:26.886856   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:27.156131   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:27.271089   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:27.384315   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:27.387703   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:27.656587   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:27.769742   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:27.883721   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:27.887193   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:28.156087   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:28.272646   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:28.383926   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:28.386631   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:28.656100   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:28.773416   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:28.884461   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:28.887854   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:29.156817   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:29.270866   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:29.384528   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:29.388628   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:29.656020   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:29.773644   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:29.884116   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:29.888046   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:30.156031   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:30.270303   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:30.385896   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:30.388477   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:30.657019   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:30.770326   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:30.883645   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:30.887050   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:31.156203   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:31.270773   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:31.383564   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:31.386473   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:31.658903   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:31.776015   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:31.885161   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:31.888341   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:32.156210   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:32.272653   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:32.391893   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:32.399702   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:32.657035   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:32.770419   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:32.885594   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:32.891756   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:33.157655   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:33.273833   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:33.384300   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:33.386872   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:33.656935   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:33.772144   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:33.884376   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:33.887225   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:34.156191   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:34.270151   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:34.384777   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:34.387413   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:34.656689   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:34.769915   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:34.884056   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:34.887639   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:35.157515   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:35.269526   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:35.386142   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:35.400084   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:35.656139   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:35.769959   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:35.884185   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:35.886691   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:36.156190   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:36.269587   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:36.384164   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:36.386784   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:36.656585   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:36.769328   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:36.883655   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:36.885923   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:37.156046   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:37.270042   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:37.384565   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:37.386760   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:37.657700   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:37.769503   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:37.885155   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:37.887730   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:38.156733   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:38.272188   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:38.384709   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:38.389139   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:38.656712   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:38.769636   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:38.883905   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:38.889926   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:39.157006   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:39.270402   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:39.384348   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:39.386702   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:39.658286   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:39.771604   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:39.884257   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:39.887645   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:40.155982   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:40.270073   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:40.384015   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:40.386886   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:40.655786   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:40.770291   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:40.884393   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:40.886894   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:41.157757   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:41.270196   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:41.384873   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:41.388312   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:41.656403   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:41.770726   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:42.298393   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:42.308638   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:42.308965   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:42.309046   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:42.384715   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:42.388715   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:42.658237   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:42.771113   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:42.885055   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:42.887520   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:43.157401   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:43.279630   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:43.390052   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:43.391668   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:43.656093   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:43.769921   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:43.884414   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:43.887300   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:44.156314   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:44.270102   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:44.384235   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:44.387800   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:44.657212   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:44.769895   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:44.884270   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:44.887256   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:45.157123   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:45.270710   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:45.385191   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:45.387645   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:45.658270   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:45.770313   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:45.884394   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:45.887548   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:46.156628   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:46.269687   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:46.385075   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:46.390999   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:46.764732   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:46.772397   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:46.884187   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:46.889679   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:47.156227   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:47.270432   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:47.384627   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:47.388291   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:47.656352   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:47.769330   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:47.884000   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:47.895906   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:48.157217   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:48.270934   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:48.384543   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:48.388060   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:48.656536   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:48.769979   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:48.884258   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:48.887201   13764 kapi.go:107] duration metric: took 33.004679515s to wait for kubernetes.io/minikube-addons=registry ...
	I0708 19:30:49.156325   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:49.271129   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:49.384708   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:49.656661   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:49.769494   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:49.883797   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:50.156533   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:50.271552   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:50.807904   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:50.812109   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:50.812558   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:50.884405   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:51.157128   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:51.270360   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:51.384468   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:51.656325   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:51.773422   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:51.884854   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:52.158700   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:52.271198   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:52.384958   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:52.656690   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:52.770983   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:52.883776   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:53.155793   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:53.269881   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:53.384453   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:53.656589   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:53.769677   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:53.883783   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:54.156456   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:54.270819   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:54.384556   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:54.656509   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:54.770449   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:54.884691   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:55.157309   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:55.270294   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:55.384024   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:55.752030   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:55.781951   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:55.884359   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:56.156465   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:56.269498   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:56.384205   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:56.656304   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:56.772072   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:57.157085   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:57.157418   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:57.270609   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:57.384803   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:57.656776   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:57.771891   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:57.884421   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:58.155820   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:58.269802   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:58.384744   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:58.656535   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:58.769390   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:58.885632   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:59.158183   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:59.271283   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:59.386444   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:59.658882   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:59.771120   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:59.884118   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:00.156707   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:00.269925   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:00.385287   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:00.656683   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:00.770058   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:00.886731   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:01.156656   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:01.269723   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:01.384905   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:01.655845   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:01.769891   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:01.883172   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:02.156276   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:02.270399   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:02.384817   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:02.726018   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:02.848798   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:02.885509   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:03.161258   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:03.270733   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:03.386929   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:03.658195   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:03.769719   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:03.889449   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:04.155788   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:04.269871   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:04.384425   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:04.656882   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:04.769809   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:04.889445   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:05.156210   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:05.273112   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:05.384168   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:05.656366   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:05.771626   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:05.884488   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:06.156983   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:06.272548   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:06.385551   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:06.656047   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:06.782823   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:06.889006   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:07.155882   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:07.271092   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:07.392574   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:07.657466   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:07.775709   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:07.887627   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:08.159404   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:08.269466   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:08.385824   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:08.656899   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:08.770024   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:08.885480   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:09.157120   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:09.270472   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:09.384852   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:09.662447   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:09.770592   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:09.884990   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:10.156501   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:10.269830   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:10.383964   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:10.656249   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:10.771880   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:10.884668   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:11.157177   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:11.270951   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:11.384360   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:11.657510   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:11.769757   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:11.884547   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:12.156420   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:12.270183   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:12.384981   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:12.657499   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:12.769955   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:12.883559   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:13.155982   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:13.270067   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:13.384794   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:13.656915   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:13.770552   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:13.885551   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:14.157489   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:14.269755   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:14.388648   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:15.000574   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:15.000805   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:15.001595   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:15.157028   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:15.270755   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:15.385812   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:15.656274   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:15.770649   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:15.884626   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:16.163615   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:16.269642   13764 kapi.go:107] duration metric: took 59.50549752s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0708 19:31:16.384728   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:16.656842   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:16.884601   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:17.156870   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:17.384551   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:17.656680   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:17.884106   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:18.157778   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:18.384108   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:18.656556   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:18.886096   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:19.160835   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:19.384579   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:19.657234   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:19.884491   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:20.156697   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:20.385301   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:20.656426   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:20.885723   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:21.157531   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:21.386726   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:21.656337   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:21.884912   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:22.155507   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:22.384938   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:22.655690   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:22.884207   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:23.156641   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:23.384950   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:23.656840   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:23.884069   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:24.157051   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:24.384731   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:24.656987   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:24.885400   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:25.400809   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:25.404808   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:25.655792   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:25.884064   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:26.156702   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:26.385350   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:26.656501   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:26.885112   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:27.158985   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:27.392981   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:27.670044   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:27.886066   13764 kapi.go:107] duration metric: took 1m12.006440672s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0708 19:31:28.157332   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:28.695753   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:29.158456   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:29.660563   13764 kapi.go:107] duration metric: took 1m10.508162614s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0708 19:31:29.662136   13764 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-268316 cluster.
	I0708 19:31:29.663631   13764 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0708 19:31:29.665306   13764 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0708 19:31:29.666724   13764 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, helm-tiller, storage-provisioner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0708 19:31:29.667996   13764 addons.go:510] duration metric: took 1m23.331896601s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner helm-tiller storage-provisioner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0708 19:31:29.668030   13764 start.go:245] waiting for cluster config update ...
	I0708 19:31:29.668047   13764 start.go:254] writing updated cluster config ...
	I0708 19:31:29.668279   13764 ssh_runner.go:195] Run: rm -f paused
	I0708 19:31:29.721898   13764 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 19:31:29.723824   13764 out.go:177] * Done! kubectl is now configured to use "addons-268316" cluster and "default" namespace by default
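
	The repeated kapi.go:96 entries above come from minikube polling a label selector until every matching pod leaves Pending, and the kapi.go:107 lines record how long each selector took. The following is a minimal client-go sketch of that wait pattern, not minikube's actual kapi.go code; the namespace, poll interval, timeout, and kubeconfig path are illustrative assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForSelectorRunning polls pods matching selector in ns until all are Running,
	// printing a line similar to the "waiting for pod ... current state" entries above.
	func waitForSelectorRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				running := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						running = false
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						break
					}
				}
				if running {
					fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // illustrative poll interval; minikube's may differ
		}
		return fmt.Errorf("timed out waiting for %s after %s", selector, timeout)
	}

	func main() {
		// Assumes the default kubeconfig location; pass an explicit path as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForSelectorRunning(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
	}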
	
	
	==> CRI-O <==
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.004498151Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720467256004471326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587821,},InodesUsed:&UInt64Value{Value:204,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2b2d5f7-c790-4470-a5ae-3524ff8c6294 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.005094351Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cba7dc7-54ae-4a47-bbec-864f26de387c name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.005202079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cba7dc7-54ae-4a47-bbec-864f26de387c name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.005656498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f04837ef2d2a591237579fa868e9a2aef2dd5b55f6ca0f9e4216d0f9a5a77cb,PodSandboxId:8b3b8135d419631ff3173aa315156556e130137c4f9028add8ea5b0254fe418a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720467247927762236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-lznqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db22bb68-894a-454b-a1d2-9410d39a9528,},Annotations:map[string]string{io.kubernetes.container.hash: 3b742db2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ac69521142a6e35cd53b6146de1f860720de3a3b9d912255bd3b66a9ef1aa9,PodSandboxId:81bb11f417f17a79ce947d7ce9f7acc952bd3a5e0a0ee55786cd608bca00bdc0,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720467109169255473,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5771cdad-38eb-4b69-9d82-5a58ef2c2f4e,},Annotations:map[string]string{io.kubern
etes.container.hash: 92c93ea5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa89db705add5299b6512650662be261a1b54171a36defda0febaa4d76b7719,PodSandboxId:6221ab3e632e79d5d9bc777c45be85aa4398f095df5d16085e097688153d9fc6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720467097497177949,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-cgkpr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61b3fef5-b549-4aab-a5f7-da35eb3d4477,},Annotations:map[string]string{io.kubernetes.container.hash: fd1fb148,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15069a7b3f50f8b733f6b841313e7a8a53493fde2473f0d6937d3d42cdb19b58,PodSandboxId:a475af66e07627f5d7be099005a460014744a7e5e962deff973069a4ddf3ee6b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720467088955878112,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-gtf45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 60da309c-ad4b-4388-aa45-131c4fb0f4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 12f75852,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c1dd586ce67e6238a1cfaffc3490bd72a604cdd37589b6fc143c48bbe669bb,PodSandboxId:df2322cd913fef3666747a84f57a3c7bbb976ea99bf9cfcb9d54992f63072298,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1720467061183925188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-c4fls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4812f288-e0bc-4f79-9497-3c911d963eb1,},Annotations:map[string]string{io.kubernetes.container.hash: dbc35b48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:648cabbd1c23d6e1cb4e2fe82a58559d0355fd3fc4814fb0faab0e47b04c08a6,PodSandboxId:521925d164457282c8ac32ca8935b1cf4e38efcc19423b481fdb6eca95348a6e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1720467061061876992,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7d749,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a592f04-8095-4e81-befe-9bb48c44e466,},Annotations:map[string]string{io.kubernetes.container.hash: 5ac2730b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db2932e06f7be72942ef239e31d1031ce07694c0eb50c48426a91525fc5997b,PodSandboxId:7b106f06e44b15bc52775874e37735172477625277d973d9f8e510aa5a0f5007,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978f
bf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1720467058658406337,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rf6p2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3ac6741a-bec9-4f29-a6eb-c73c7500970b,},Annotations:map[string]string{io.kubernetes.container.hash: a6c15013,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15a419524494fe4cac639c22abd343bc586a2b8dacee4ba44e05b64a982534b,PodSandboxId:494772db18f3ff4a6eed10b94a087e898e932f0db0dd5abca014a0e933a95851,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1720467052855153288,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-nqm94,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 894303f6-0b3f-451e-8b4c-a1269b70c68f,},Annotations:map[string]string{io.kubernetes.container.hash: b37e3ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d,PodSandboxId:68cc01146add074afc7474a39a65cf3f67d5159accedf923d725dfb2b979aa44,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720467042432339060,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c6gzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5607f8-de0f-4bb1-b219-54ef33238b21,},Annotations:map[string]string{io.kubernetes.container.hash: 5e9677a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0486a262195e25e0cbcb85c7f856a35300a55c800deabb7b3cea1c342fb270,PodSandboxId:cab7dfbbaf216814b4579d7313dd71505a5
e81c4b09c6bf1abec9adf853bd02d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720467014260979278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a22fea0-2e74-4b1d-8943-4009c3bae190,},Annotations:map[string]string{io.kubernetes.container.hash: 2544e9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa46f641fce3a59d88fe88837a4c8c08f7e4447206bc8e44e12b9f4f5079abef,PodSandboxId:33f84dfe9bc8103d8a4d8447c3cb88183ca9f280e28de2f9
2203373b2195c63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720467010485771198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdmnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8790295-025f-492c-8527-b45580989758,},Annotations:map[string]string{io.kubernetes.container.hash: 933e8636,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49fc1829105fd93b0c9eef5eaf11f30232d42efabb4cb4130c54a76a96ddbd82,PodSandboxId:36132fbfe93b031bbb4a7915d682454b32a82ec148af0e191ae9410b8818414d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720467007796499322,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7plgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dcd9909-5fdf-4a54-a66c-12498b65c28f,},Annotations:map[string]string{io.kubernetes.container.hash: 683e9680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:9d49be99483f5c15756481dec1f198cbd8e9da87539ae5759ec447421c2bf138,PodSandboxId:717561d56daf2914143b08bb1f10bf41c455065ce54ea0b073b734843dc7684e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720466987871363903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c73b77c9e8c067af0478499956a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:1a92a99f73b4cef445e51d38c9c94905a53d179bb9954413a5a15d3c7b803b46,PodSandboxId:329650aaf1bc3112b2f246746ca0fdbb0bcf8fde6ea8df7451a3998bfc1a8642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720466987910144817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a0cfbd4519e6880ca99be18bf725eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:e35d37ebf78b3809e33fc570ccdc8fa7d7a0fd4dcb658545c70675d77960f080,PodSandboxId:ffe430fd6cb316055ec66677e7e183a3803757ae260eb7eb9ebc754295c738be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720466987873699629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf9d34116c191cb68773ad425a33b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:9332ffa119798ff8821e289f7966df0d8310e8c1a67d1304c5ba54479752c901,PodSandboxId:3b2f473211f40a9fd72f56007b83165481808f94dd45efcb518930f575189497,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720466987808420964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4feb2225d826d58f607b166f558fd389,},Annotations:map[string]string{io.kubernetes.container.hash: 73820d47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4
cba7dc7-54ae-4a47-bbec-864f26de387c name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.045424735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e2028a9-d56a-4b84-89b3-97541fbe0e45 name=/runtime.v1.RuntimeService/Version
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.045681842Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e2028a9-d56a-4b84-89b3-97541fbe0e45 name=/runtime.v1.RuntimeService/Version
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.046806142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08fc7f95-2d34-412d-8f54-4972c4772d15 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.048136063Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720467256048107789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587821,},InodesUsed:&UInt64Value{Value:204,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08fc7f95-2d34-412d-8f54-4972c4772d15 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.048643985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf4ecff9-ab02-403e-bfbb-4c919c987ab8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.048717440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf4ecff9-ab02-403e-bfbb-4c919c987ab8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.049163949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f04837ef2d2a591237579fa868e9a2aef2dd5b55f6ca0f9e4216d0f9a5a77cb,PodSandboxId:8b3b8135d419631ff3173aa315156556e130137c4f9028add8ea5b0254fe418a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720467247927762236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-lznqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db22bb68-894a-454b-a1d2-9410d39a9528,},Annotations:map[string]string{io.kubernetes.container.hash: 3b742db2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ac69521142a6e35cd53b6146de1f860720de3a3b9d912255bd3b66a9ef1aa9,PodSandboxId:81bb11f417f17a79ce947d7ce9f7acc952bd3a5e0a0ee55786cd608bca00bdc0,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720467109169255473,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5771cdad-38eb-4b69-9d82-5a58ef2c2f4e,},Annotations:map[string]string{io.kubern
etes.container.hash: 92c93ea5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa89db705add5299b6512650662be261a1b54171a36defda0febaa4d76b7719,PodSandboxId:6221ab3e632e79d5d9bc777c45be85aa4398f095df5d16085e097688153d9fc6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720467097497177949,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-cgkpr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61b3fef5-b549-4aab-a5f7-da35eb3d4477,},Annotations:map[string]string{io.kubernetes.container.hash: fd1fb148,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15069a7b3f50f8b733f6b841313e7a8a53493fde2473f0d6937d3d42cdb19b58,PodSandboxId:a475af66e07627f5d7be099005a460014744a7e5e962deff973069a4ddf3ee6b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720467088955878112,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-gtf45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 60da309c-ad4b-4388-aa45-131c4fb0f4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 12f75852,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c1dd586ce67e6238a1cfaffc3490bd72a604cdd37589b6fc143c48bbe669bb,PodSandboxId:df2322cd913fef3666747a84f57a3c7bbb976ea99bf9cfcb9d54992f63072298,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1720467061183925188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-c4fls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4812f288-e0bc-4f79-9497-3c911d963eb1,},Annotations:map[string]string{io.kubernetes.container.hash: dbc35b48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:648cabbd1c23d6e1cb4e2fe82a58559d0355fd3fc4814fb0faab0e47b04c08a6,PodSandboxId:521925d164457282c8ac32ca8935b1cf4e38efcc19423b481fdb6eca95348a6e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1720467061061876992,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7d749,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a592f04-8095-4e81-befe-9bb48c44e466,},Annotations:map[string]string{io.kubernetes.container.hash: 5ac2730b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db2932e06f7be72942ef239e31d1031ce07694c0eb50c48426a91525fc5997b,PodSandboxId:7b106f06e44b15bc52775874e37735172477625277d973d9f8e510aa5a0f5007,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978f
bf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1720467058658406337,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rf6p2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3ac6741a-bec9-4f29-a6eb-c73c7500970b,},Annotations:map[string]string{io.kubernetes.container.hash: a6c15013,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15a419524494fe4cac639c22abd343bc586a2b8dacee4ba44e05b64a982534b,PodSandboxId:494772db18f3ff4a6eed10b94a087e898e932f0db0dd5abca014a0e933a95851,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1720467052855153288,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-nqm94,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 894303f6-0b3f-451e-8b4c-a1269b70c68f,},Annotations:map[string]string{io.kubernetes.container.hash: b37e3ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d,PodSandboxId:68cc01146add074afc7474a39a65cf3f67d5159accedf923d725dfb2b979aa44,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720467042432339060,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c6gzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5607f8-de0f-4bb1-b219-54ef33238b21,},Annotations:map[string]string{io.kubernetes.container.hash: 5e9677a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0486a262195e25e0cbcb85c7f856a35300a55c800deabb7b3cea1c342fb270,PodSandboxId:cab7dfbbaf216814b4579d7313dd71505a5
e81c4b09c6bf1abec9adf853bd02d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720467014260979278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a22fea0-2e74-4b1d-8943-4009c3bae190,},Annotations:map[string]string{io.kubernetes.container.hash: 2544e9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa46f641fce3a59d88fe88837a4c8c08f7e4447206bc8e44e12b9f4f5079abef,PodSandboxId:33f84dfe9bc8103d8a4d8447c3cb88183ca9f280e28de2f9
2203373b2195c63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720467010485771198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdmnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8790295-025f-492c-8527-b45580989758,},Annotations:map[string]string{io.kubernetes.container.hash: 933e8636,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49fc1829105fd93b0c9eef5eaf11f30232d42efabb4cb4130c54a76a96ddbd82,PodSandboxId:36132fbfe93b031bbb4a7915d682454b32a82ec148af0e191ae9410b8818414d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720467007796499322,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7plgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dcd9909-5fdf-4a54-a66c-12498b65c28f,},Annotations:map[string]string{io.kubernetes.container.hash: 683e9680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:9d49be99483f5c15756481dec1f198cbd8e9da87539ae5759ec447421c2bf138,PodSandboxId:717561d56daf2914143b08bb1f10bf41c455065ce54ea0b073b734843dc7684e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720466987871363903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c73b77c9e8c067af0478499956a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:1a92a99f73b4cef445e51d38c9c94905a53d179bb9954413a5a15d3c7b803b46,PodSandboxId:329650aaf1bc3112b2f246746ca0fdbb0bcf8fde6ea8df7451a3998bfc1a8642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720466987910144817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a0cfbd4519e6880ca99be18bf725eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:e35d37ebf78b3809e33fc570ccdc8fa7d7a0fd4dcb658545c70675d77960f080,PodSandboxId:ffe430fd6cb316055ec66677e7e183a3803757ae260eb7eb9ebc754295c738be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720466987873699629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf9d34116c191cb68773ad425a33b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:9332ffa119798ff8821e289f7966df0d8310e8c1a67d1304c5ba54479752c901,PodSandboxId:3b2f473211f40a9fd72f56007b83165481808f94dd45efcb518930f575189497,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720466987808420964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4feb2225d826d58f607b166f558fd389,},Annotations:map[string]string{io.kubernetes.container.hash: 73820d47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c
f4ecff9-ab02-403e-bfbb-4c919c987ab8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.087705279Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff3ab2ba-3ff3-46be-a134-9ee93f287edb name=/runtime.v1.RuntimeService/Version
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.087803562Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff3ab2ba-3ff3-46be-a134-9ee93f287edb name=/runtime.v1.RuntimeService/Version
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.089224188Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74b72104-5ce4-4b47-b3b3-0ac13218e610 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.090868075Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720467256090838211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587821,},InodesUsed:&UInt64Value{Value:204,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74b72104-5ce4-4b47-b3b3-0ac13218e610 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.091538012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f71260d-282d-4cdf-95a0-0259aeb3a925 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.091612773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f71260d-282d-4cdf-95a0-0259aeb3a925 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.092136135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f04837ef2d2a591237579fa868e9a2aef2dd5b55f6ca0f9e4216d0f9a5a77cb,PodSandboxId:8b3b8135d419631ff3173aa315156556e130137c4f9028add8ea5b0254fe418a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720467247927762236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-lznqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db22bb68-894a-454b-a1d2-9410d39a9528,},Annotations:map[string]string{io.kubernetes.container.hash: 3b742db2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ac69521142a6e35cd53b6146de1f860720de3a3b9d912255bd3b66a9ef1aa9,PodSandboxId:81bb11f417f17a79ce947d7ce9f7acc952bd3a5e0a0ee55786cd608bca00bdc0,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720467109169255473,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5771cdad-38eb-4b69-9d82-5a58ef2c2f4e,},Annotations:map[string]string{io.kubern
etes.container.hash: 92c93ea5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa89db705add5299b6512650662be261a1b54171a36defda0febaa4d76b7719,PodSandboxId:6221ab3e632e79d5d9bc777c45be85aa4398f095df5d16085e097688153d9fc6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720467097497177949,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-cgkpr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61b3fef5-b549-4aab-a5f7-da35eb3d4477,},Annotations:map[string]string{io.kubernetes.container.hash: fd1fb148,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15069a7b3f50f8b733f6b841313e7a8a53493fde2473f0d6937d3d42cdb19b58,PodSandboxId:a475af66e07627f5d7be099005a460014744a7e5e962deff973069a4ddf3ee6b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720467088955878112,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-gtf45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 60da309c-ad4b-4388-aa45-131c4fb0f4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 12f75852,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c1dd586ce67e6238a1cfaffc3490bd72a604cdd37589b6fc143c48bbe669bb,PodSandboxId:df2322cd913fef3666747a84f57a3c7bbb976ea99bf9cfcb9d54992f63072298,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1720467061183925188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-c4fls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4812f288-e0bc-4f79-9497-3c911d963eb1,},Annotations:map[string]string{io.kubernetes.container.hash: dbc35b48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:648cabbd1c23d6e1cb4e2fe82a58559d0355fd3fc4814fb0faab0e47b04c08a6,PodSandboxId:521925d164457282c8ac32ca8935b1cf4e38efcc19423b481fdb6eca95348a6e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1720467061061876992,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7d749,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a592f04-8095-4e81-befe-9bb48c44e466,},Annotations:map[string]string{io.kubernetes.container.hash: 5ac2730b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db2932e06f7be72942ef239e31d1031ce07694c0eb50c48426a91525fc5997b,PodSandboxId:7b106f06e44b15bc52775874e37735172477625277d973d9f8e510aa5a0f5007,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978f
bf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1720467058658406337,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rf6p2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3ac6741a-bec9-4f29-a6eb-c73c7500970b,},Annotations:map[string]string{io.kubernetes.container.hash: a6c15013,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15a419524494fe4cac639c22abd343bc586a2b8dacee4ba44e05b64a982534b,PodSandboxId:494772db18f3ff4a6eed10b94a087e898e932f0db0dd5abca014a0e933a95851,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1720467052855153288,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-nqm94,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 894303f6-0b3f-451e-8b4c-a1269b70c68f,},Annotations:map[string]string{io.kubernetes.container.hash: b37e3ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d,PodSandboxId:68cc01146add074afc7474a39a65cf3f67d5159accedf923d725dfb2b979aa44,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720467042432339060,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c6gzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5607f8-de0f-4bb1-b219-54ef33238b21,},Annotations:map[string]string{io.kubernetes.container.hash: 5e9677a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0486a262195e25e0cbcb85c7f856a35300a55c800deabb7b3cea1c342fb270,PodSandboxId:cab7dfbbaf216814b4579d7313dd71505a5
e81c4b09c6bf1abec9adf853bd02d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720467014260979278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a22fea0-2e74-4b1d-8943-4009c3bae190,},Annotations:map[string]string{io.kubernetes.container.hash: 2544e9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa46f641fce3a59d88fe88837a4c8c08f7e4447206bc8e44e12b9f4f5079abef,PodSandboxId:33f84dfe9bc8103d8a4d8447c3cb88183ca9f280e28de2f9
2203373b2195c63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720467010485771198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdmnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8790295-025f-492c-8527-b45580989758,},Annotations:map[string]string{io.kubernetes.container.hash: 933e8636,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49fc1829105fd93b0c9eef5eaf11f30232d42efabb4cb4130c54a76a96ddbd82,PodSandboxId:36132fbfe93b031bbb4a7915d682454b32a82ec148af0e191ae9410b8818414d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720467007796499322,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7plgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dcd9909-5fdf-4a54-a66c-12498b65c28f,},Annotations:map[string]string{io.kubernetes.container.hash: 683e9680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:9d49be99483f5c15756481dec1f198cbd8e9da87539ae5759ec447421c2bf138,PodSandboxId:717561d56daf2914143b08bb1f10bf41c455065ce54ea0b073b734843dc7684e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720466987871363903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c73b77c9e8c067af0478499956a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:1a92a99f73b4cef445e51d38c9c94905a53d179bb9954413a5a15d3c7b803b46,PodSandboxId:329650aaf1bc3112b2f246746ca0fdbb0bcf8fde6ea8df7451a3998bfc1a8642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720466987910144817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a0cfbd4519e6880ca99be18bf725eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:e35d37ebf78b3809e33fc570ccdc8fa7d7a0fd4dcb658545c70675d77960f080,PodSandboxId:ffe430fd6cb316055ec66677e7e183a3803757ae260eb7eb9ebc754295c738be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720466987873699629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf9d34116c191cb68773ad425a33b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:9332ffa119798ff8821e289f7966df0d8310e8c1a67d1304c5ba54479752c901,PodSandboxId:3b2f473211f40a9fd72f56007b83165481808f94dd45efcb518930f575189497,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720466987808420964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4feb2225d826d58f607b166f558fd389,},Annotations:map[string]string{io.kubernetes.container.hash: 73820d47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9
f71260d-282d-4cdf-95a0-0259aeb3a925 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.130246664Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d833eb96-94ca-4bd0-878c-d895a152d5d9 name=/runtime.v1.RuntimeService/Version
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.130340685Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d833eb96-94ca-4bd0-878c-d895a152d5d9 name=/runtime.v1.RuntimeService/Version
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.132721909Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e836e0ee-fd5b-4aa9-bbad-6f746569c4d8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.134045529Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720467256133965864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587821,},InodesUsed:&UInt64Value{Value:204,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e836e0ee-fd5b-4aa9-bbad-6f746569c4d8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.134957504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c0515a9-1591-4e9e-bffe-e63e0d63c5be name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.135073960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c0515a9-1591-4e9e-bffe-e63e0d63c5be name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:34:16 addons-268316 crio[685]: time="2024-07-08 19:34:16.135428224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f04837ef2d2a591237579fa868e9a2aef2dd5b55f6ca0f9e4216d0f9a5a77cb,PodSandboxId:8b3b8135d419631ff3173aa315156556e130137c4f9028add8ea5b0254fe418a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720467247927762236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-lznqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db22bb68-894a-454b-a1d2-9410d39a9528,},Annotations:map[string]string{io.kubernetes.container.hash: 3b742db2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ac69521142a6e35cd53b6146de1f860720de3a3b9d912255bd3b66a9ef1aa9,PodSandboxId:81bb11f417f17a79ce947d7ce9f7acc952bd3a5e0a0ee55786cd608bca00bdc0,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720467109169255473,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5771cdad-38eb-4b69-9d82-5a58ef2c2f4e,},Annotations:map[string]string{io.kubern
etes.container.hash: 92c93ea5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa89db705add5299b6512650662be261a1b54171a36defda0febaa4d76b7719,PodSandboxId:6221ab3e632e79d5d9bc777c45be85aa4398f095df5d16085e097688153d9fc6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720467097497177949,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-cgkpr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61b3fef5-b549-4aab-a5f7-da35eb3d4477,},Annotations:map[string]string{io.kubernetes.container.hash: fd1fb148,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15069a7b3f50f8b733f6b841313e7a8a53493fde2473f0d6937d3d42cdb19b58,PodSandboxId:a475af66e07627f5d7be099005a460014744a7e5e962deff973069a4ddf3ee6b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720467088955878112,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-gtf45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 60da309c-ad4b-4388-aa45-131c4fb0f4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 12f75852,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c1dd586ce67e6238a1cfaffc3490bd72a604cdd37589b6fc143c48bbe669bb,PodSandboxId:df2322cd913fef3666747a84f57a3c7bbb976ea99bf9cfcb9d54992f63072298,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1720467061183925188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-c4fls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4812f288-e0bc-4f79-9497-3c911d963eb1,},Annotations:map[string]string{io.kubernetes.container.hash: dbc35b48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:648cabbd1c23d6e1cb4e2fe82a58559d0355fd3fc4814fb0faab0e47b04c08a6,PodSandboxId:521925d164457282c8ac32ca8935b1cf4e38efcc19423b481fdb6eca95348a6e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1720467061061876992,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7d749,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a592f04-8095-4e81-befe-9bb48c44e466,},Annotations:map[string]string{io.kubernetes.container.hash: 5ac2730b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db2932e06f7be72942ef239e31d1031ce07694c0eb50c48426a91525fc5997b,PodSandboxId:7b106f06e44b15bc52775874e37735172477625277d973d9f8e510aa5a0f5007,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978f
bf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1720467058658406337,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rf6p2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3ac6741a-bec9-4f29-a6eb-c73c7500970b,},Annotations:map[string]string{io.kubernetes.container.hash: a6c15013,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15a419524494fe4cac639c22abd343bc586a2b8dacee4ba44e05b64a982534b,PodSandboxId:494772db18f3ff4a6eed10b94a087e898e932f0db0dd5abca014a0e933a95851,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1720467052855153288,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-nqm94,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 894303f6-0b3f-451e-8b4c-a1269b70c68f,},Annotations:map[string]string{io.kubernetes.container.hash: b37e3ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d,PodSandboxId:68cc01146add074afc7474a39a65cf3f67d5159accedf923d725dfb2b979aa44,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720467042432339060,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c6gzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5607f8-de0f-4bb1-b219-54ef33238b21,},Annotations:map[string]string{io.kubernetes.container.hash: 5e9677a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0486a262195e25e0cbcb85c7f856a35300a55c800deabb7b3cea1c342fb270,PodSandboxId:cab7dfbbaf216814b4579d7313dd71505a5
e81c4b09c6bf1abec9adf853bd02d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720467014260979278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a22fea0-2e74-4b1d-8943-4009c3bae190,},Annotations:map[string]string{io.kubernetes.container.hash: 2544e9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa46f641fce3a59d88fe88837a4c8c08f7e4447206bc8e44e12b9f4f5079abef,PodSandboxId:33f84dfe9bc8103d8a4d8447c3cb88183ca9f280e28de2f9
2203373b2195c63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720467010485771198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdmnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8790295-025f-492c-8527-b45580989758,},Annotations:map[string]string{io.kubernetes.container.hash: 933e8636,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49fc1829105fd93b0c9eef5eaf11f30232d42efabb4cb4130c54a76a96ddbd82,PodSandboxId:36132fbfe93b031bbb4a7915d682454b32a82ec148af0e191ae9410b8818414d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720467007796499322,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7plgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dcd9909-5fdf-4a54-a66c-12498b65c28f,},Annotations:map[string]string{io.kubernetes.container.hash: 683e9680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:9d49be99483f5c15756481dec1f198cbd8e9da87539ae5759ec447421c2bf138,PodSandboxId:717561d56daf2914143b08bb1f10bf41c455065ce54ea0b073b734843dc7684e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720466987871363903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c73b77c9e8c067af0478499956a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:1a92a99f73b4cef445e51d38c9c94905a53d179bb9954413a5a15d3c7b803b46,PodSandboxId:329650aaf1bc3112b2f246746ca0fdbb0bcf8fde6ea8df7451a3998bfc1a8642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720466987910144817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a0cfbd4519e6880ca99be18bf725eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:e35d37ebf78b3809e33fc570ccdc8fa7d7a0fd4dcb658545c70675d77960f080,PodSandboxId:ffe430fd6cb316055ec66677e7e183a3803757ae260eb7eb9ebc754295c738be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720466987873699629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf9d34116c191cb68773ad425a33b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:9332ffa119798ff8821e289f7966df0d8310e8c1a67d1304c5ba54479752c901,PodSandboxId:3b2f473211f40a9fd72f56007b83165481808f94dd45efcb518930f575189497,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720466987808420964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4feb2225d826d58f607b166f558fd389,},Annotations:map[string]string{io.kubernetes.container.hash: 73820d47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5
c0515a9-1591-4e9e-bffe-e63e0d63c5be name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4f04837ef2d2a       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   8b3b8135d4196       hello-world-app-86c47465fc-lznqj
	35ac69521142a       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   81bb11f417f17       nginx
	8fa89db705add       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   6221ab3e632e7       headlamp-7867546754-cgkpr
	15069a7b3f50f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 2 minutes ago       Running             gcp-auth                  0                   a475af66e0762       gcp-auth-5db96cd9b4-gtf45
	35c1dd586ce67       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              patch                     0                   df2322cd913fe       ingress-nginx-admission-patch-c4fls
	648cabbd1c23d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   521925d164457       ingress-nginx-admission-create-7d749
	1db2932e06f7b       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              3 minutes ago       Running             yakd                      0                   7b106f06e44b1       yakd-dashboard-799879c74f-rf6p2
	a15a419524494       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   494772db18f3f       local-path-provisioner-8d985888d-nqm94
	15a517fd0d065       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   68cc01146add0       metrics-server-c59844bb4-c6gzl
	0e0486a262195       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   cab7dfbbaf216       storage-provisioner
	aa46f641fce3a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   33f84dfe9bc81       coredns-7db6d8ff4d-mdmnx
	49fc1829105fd       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                             4 minutes ago       Running             kube-proxy                0                   36132fbfe93b0       kube-proxy-7plgc
	1a92a99f73b4c       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                             4 minutes ago       Running             kube-apiserver            0                   329650aaf1bc3       kube-apiserver-addons-268316
	e35d37ebf78b3       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                             4 minutes ago       Running             kube-controller-manager   0                   ffe430fd6cb31       kube-controller-manager-addons-268316
	9d49be99483f5       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                             4 minutes ago       Running             kube-scheduler            0                   717561d56daf2       kube-scheduler-addons-268316
	9332ffa119798       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   3b2f473211f40       etcd-addons-268316
	
	
	==> coredns [aa46f641fce3a59d88fe88837a4c8c08f7e4447206bc8e44e12b9f4f5079abef] <==
	[INFO] 10.244.0.8:57008 - 20929 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000135063s
	[INFO] 10.244.0.8:35379 - 49666 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000136303s
	[INFO] 10.244.0.8:35379 - 13313 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000063072s
	[INFO] 10.244.0.8:44352 - 48968 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081466s
	[INFO] 10.244.0.8:44352 - 10574 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102635s
	[INFO] 10.244.0.8:47113 - 57758 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110145s
	[INFO] 10.244.0.8:47113 - 62864 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006031s
	[INFO] 10.244.0.8:46632 - 8198 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097314s
	[INFO] 10.244.0.8:46632 - 64773 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000059706s
	[INFO] 10.244.0.8:57492 - 57986 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055222s
	[INFO] 10.244.0.8:57492 - 50308 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070743s
	[INFO] 10.244.0.8:58094 - 3548 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048957s
	[INFO] 10.244.0.8:58094 - 29150 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083368s
	[INFO] 10.244.0.8:35355 - 6749 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000052974s
	[INFO] 10.244.0.8:35355 - 34399 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000156485s
	[INFO] 10.244.0.22:43752 - 4695 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000496387s
	[INFO] 10.244.0.22:57415 - 5936 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000076941s
	[INFO] 10.244.0.22:54805 - 5906 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000103352s
	[INFO] 10.244.0.22:34679 - 17723 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079246s
	[INFO] 10.244.0.22:44111 - 3213 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105983s
	[INFO] 10.244.0.22:42665 - 34122 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063248s
	[INFO] 10.244.0.22:38716 - 2315 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000731728s
	[INFO] 10.244.0.22:54782 - 52403 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.00044554s
	[INFO] 10.244.0.25:55033 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000236273s
	[INFO] 10.244.0.25:36867 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116024s
	
	
	==> describe nodes <==
	Name:               addons-268316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-268316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=addons-268316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T19_29_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-268316
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:29:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-268316
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 19:34:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:32:26 +0000   Mon, 08 Jul 2024 19:29:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:32:26 +0000   Mon, 08 Jul 2024 19:29:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:32:26 +0000   Mon, 08 Jul 2024 19:29:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:32:26 +0000   Mon, 08 Jul 2024 19:29:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    addons-268316
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 12ddb492c0af4611b4c2501c2b7881af
	  System UUID:                12ddb492-c0af-4611-b4c2-501c2b7881af
	  Boot ID:                    8b0b105f-947c-4a97-ba70-9386535e08a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-lznqj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-5db96cd9b4-gtf45                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  headlamp                    headlamp-7867546754-cgkpr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-7db6d8ff4d-mdmnx                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m10s
	  kube-system                 etcd-addons-268316                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m23s
	  kube-system                 kube-apiserver-addons-268316              250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-addons-268316     200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-proxy-7plgc                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-addons-268316              100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 metrics-server-c59844bb4-c6gzl            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m4s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  local-path-storage          local-path-provisioner-8d985888d-nqm94    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  yakd-dashboard              yakd-dashboard-799879c74f-rf6p2           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m7s   kube-proxy       
	  Normal  Starting                 4m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m24s  kubelet          Node addons-268316 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s  kubelet          Node addons-268316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s  kubelet          Node addons-268316 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m22s  kubelet          Node addons-268316 status is now: NodeReady
	  Normal  RegisteredNode           4m11s  node-controller  Node addons-268316 event: Registered Node addons-268316 in Controller
	
	
	==> dmesg <==
	[  +0.087540] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.287343] kauditd_printk_skb: 18 callbacks suppressed
	[Jul 8 19:30] systemd-fstab-generator[1486]: Ignoring "noauto" option for root device
	[  +5.169700] kauditd_printk_skb: 103 callbacks suppressed
	[  +5.035103] kauditd_printk_skb: 125 callbacks suppressed
	[  +8.695411] kauditd_printk_skb: 98 callbacks suppressed
	[ +17.090744] kauditd_printk_skb: 8 callbacks suppressed
	[ +10.289816] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.945770] kauditd_printk_skb: 9 callbacks suppressed
	[Jul 8 19:31] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.238576] kauditd_printk_skb: 52 callbacks suppressed
	[  +6.038021] kauditd_printk_skb: 24 callbacks suppressed
	[ +10.547510] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.448399] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.294649] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.218769] kauditd_printk_skb: 53 callbacks suppressed
	[  +6.587815] kauditd_printk_skb: 39 callbacks suppressed
	[Jul 8 19:32] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.799676] kauditd_printk_skb: 29 callbacks suppressed
	[ +14.704295] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.557562] kauditd_printk_skb: 7 callbacks suppressed
	[ +23.123568] kauditd_printk_skb: 7 callbacks suppressed
	[Jul 8 19:33] kauditd_printk_skb: 33 callbacks suppressed
	[Jul 8 19:34] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.897966] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [9332ffa119798ff8821e289f7966df0d8310e8c1a67d1304c5ba54479752c901] <==
	{"level":"info","ts":"2024-07-08T19:31:14.981709Z","caller":"traceutil/trace.go:171","msg":"trace[1421868555] linearizableReadLoop","detail":"{readStateIndex:1116; appliedIndex:1115; }","duration":"340.425348ms","start":"2024-07-08T19:31:14.641269Z","end":"2024-07-08T19:31:14.981695Z","steps":["trace[1421868555] 'read index received'  (duration: 340.253266ms)","trace[1421868555] 'applied index is now lower than readState.Index'  (duration: 171.626µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T19:31:14.981883Z","caller":"traceutil/trace.go:171","msg":"trace[2046605009] transaction","detail":"{read_only:false; response_revision:1086; number_of_response:1; }","duration":"435.496297ms","start":"2024-07-08T19:31:14.546378Z","end":"2024-07-08T19:31:14.981875Z","steps":["trace[2046605009] 'process raft request'  (duration: 435.183712ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:14.981959Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-08T19:31:14.546361Z","time spent":"435.539697ms","remote":"127.0.0.1:33070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-268316\" mod_revision:1011 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-268316\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-268316\" > >"}
	{"level":"warn","ts":"2024-07-08T19:31:14.982048Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.795744ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-07-08T19:31:14.982092Z","caller":"traceutil/trace.go:171","msg":"trace[718086904] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1086; }","duration":"232.919802ms","start":"2024-07-08T19:31:14.749161Z","end":"2024-07-08T19:31:14.982081Z","steps":["trace[718086904] 'agreement among raft nodes before linearized reading'  (duration: 232.758903ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:14.982202Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.932826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-08T19:31:14.982224Z","caller":"traceutil/trace.go:171","msg":"trace[284475579] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1086; }","duration":"340.977006ms","start":"2024-07-08T19:31:14.641242Z","end":"2024-07-08T19:31:14.982219Z","steps":["trace[284475579] 'agreement among raft nodes before linearized reading'  (duration: 340.905087ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:14.982237Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-08T19:31:14.641228Z","time spent":"341.004414ms","remote":"127.0.0.1:32972","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11475,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-08T19:31:14.982311Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.668274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-08T19:31:14.982334Z","caller":"traceutil/trace.go:171","msg":"trace[2023217837] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1086; }","duration":"113.71307ms","start":"2024-07-08T19:31:14.868614Z","end":"2024-07-08T19:31:14.982327Z","steps":["trace[2023217837] 'agreement among raft nodes before linearized reading'  (duration: 113.634042ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:14.982435Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.214792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85554"}
	{"level":"info","ts":"2024-07-08T19:31:14.982453Z","caller":"traceutil/trace.go:171","msg":"trace[1292665772] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1086; }","duration":"230.255755ms","start":"2024-07-08T19:31:14.752191Z","end":"2024-07-08T19:31:14.982447Z","steps":["trace[1292665772] 'agreement among raft nodes before linearized reading'  (duration: 230.125276ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:31:25.323397Z","caller":"traceutil/trace.go:171","msg":"trace[352211193] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"309.119307ms","start":"2024-07-08T19:31:25.014261Z","end":"2024-07-08T19:31:25.323381Z","steps":["trace[352211193] 'process raft request'  (duration: 309.020066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:25.323637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.621624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-08T19:31:25.323397Z","caller":"traceutil/trace.go:171","msg":"trace[1487299839] linearizableReadLoop","detail":"{readStateIndex:1143; appliedIndex:1143; }","duration":"224.525358ms","start":"2024-07-08T19:31:25.098854Z","end":"2024-07-08T19:31:25.323379Z","steps":["trace[1487299839] 'read index received'  (duration: 224.518818ms)","trace[1487299839] 'applied index is now lower than readState.Index'  (duration: 5.684µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T19:31:25.3237Z","caller":"traceutil/trace.go:171","msg":"trace[199844631] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:1111; }","duration":"224.884035ms","start":"2024-07-08T19:31:25.098808Z","end":"2024-07-08T19:31:25.323692Z","steps":["trace[199844631] 'agreement among raft nodes before linearized reading'  (duration: 224.593635ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:25.323665Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-08T19:31:25.014246Z","time spent":"309.362174ms","remote":"127.0.0.1:33070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1102 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-07-08T19:31:25.381409Z","caller":"traceutil/trace.go:171","msg":"trace[1021177162] transaction","detail":"{read_only:false; response_revision:1112; number_of_response:1; }","duration":"176.682468ms","start":"2024-07-08T19:31:25.204704Z","end":"2024-07-08T19:31:25.381387Z","steps":["trace[1021177162] 'process raft request'  (duration: 174.153029ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:25.383704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.04023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-08T19:31:25.383843Z","caller":"traceutil/trace.go:171","msg":"trace[306522552] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1112; }","duration":"196.910025ms","start":"2024-07-08T19:31:25.186918Z","end":"2024-07-08T19:31:25.383828Z","steps":["trace[306522552] 'agreement among raft nodes before linearized reading'  (duration: 195.046097ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:25.384606Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.374772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-08T19:31:25.38473Z","caller":"traceutil/trace.go:171","msg":"trace[857643364] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1112; }","duration":"244.529124ms","start":"2024-07-08T19:31:25.140192Z","end":"2024-07-08T19:31:25.384721Z","steps":["trace[857643364] 'agreement among raft nodes before linearized reading'  (duration: 244.327416ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:32:21.399196Z","caller":"traceutil/trace.go:171","msg":"trace[454135626] transaction","detail":"{read_only:false; response_revision:1478; number_of_response:1; }","duration":"280.584913ms","start":"2024-07-08T19:32:21.118586Z","end":"2024-07-08T19:32:21.399171Z","steps":["trace[454135626] 'process raft request'  (duration: 280.166763ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:32:26.547334Z","caller":"traceutil/trace.go:171","msg":"trace[307156087] transaction","detail":"{read_only:false; response_revision:1504; number_of_response:1; }","duration":"131.109067ms","start":"2024-07-08T19:32:26.416197Z","end":"2024-07-08T19:32:26.547306Z","steps":["trace[307156087] 'process raft request'  (duration: 130.722018ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:32:51.969815Z","caller":"traceutil/trace.go:171","msg":"trace[1658838365] transaction","detail":"{read_only:false; response_revision:1588; number_of_response:1; }","duration":"124.393838ms","start":"2024-07-08T19:32:51.845394Z","end":"2024-07-08T19:32:51.969788Z","steps":["trace[1658838365] 'process raft request'  (duration: 124.272508ms)"],"step_count":1}
	
	
	==> gcp-auth [15069a7b3f50f8b733f6b841313e7a8a53493fde2473f0d6937d3d42cdb19b58] <==
	2024/07/08 19:31:29 GCP Auth Webhook started!
	2024/07/08 19:31:30 Ready to marshal response ...
	2024/07/08 19:31:30 Ready to write response ...
	2024/07/08 19:31:30 Ready to marshal response ...
	2024/07/08 19:31:30 Ready to write response ...
	2024/07/08 19:31:30 Ready to marshal response ...
	2024/07/08 19:31:30 Ready to write response ...
	2024/07/08 19:31:34 Ready to marshal response ...
	2024/07/08 19:31:34 Ready to write response ...
	2024/07/08 19:31:40 Ready to marshal response ...
	2024/07/08 19:31:40 Ready to write response ...
	2024/07/08 19:31:46 Ready to marshal response ...
	2024/07/08 19:31:46 Ready to write response ...
	2024/07/08 19:31:54 Ready to marshal response ...
	2024/07/08 19:31:54 Ready to write response ...
	2024/07/08 19:31:55 Ready to marshal response ...
	2024/07/08 19:31:55 Ready to write response ...
	2024/07/08 19:32:04 Ready to marshal response ...
	2024/07/08 19:32:04 Ready to write response ...
	2024/07/08 19:32:14 Ready to marshal response ...
	2024/07/08 19:32:14 Ready to write response ...
	2024/07/08 19:32:44 Ready to marshal response ...
	2024/07/08 19:32:44 Ready to write response ...
	2024/07/08 19:34:05 Ready to marshal response ...
	2024/07/08 19:34:05 Ready to write response ...
	
	
	==> kernel <==
	 19:34:16 up 4 min,  0 users,  load average: 0.83, 1.18, 0.61
	Linux addons-268316 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1a92a99f73b4cef445e51d38c9c94905a53d179bb9954413a5a15d3c7b803b46] <==
	W0708 19:31:43.721726       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 19:31:43.722367       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0708 19:31:43.722563       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.226.252:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.226.252:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.226.252:443: connect: connection refused
	E0708 19:31:43.727439       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.226.252:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.226.252:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.226.252:443: connect: connection refused
	I0708 19:31:43.793162       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0708 19:31:45.999954       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0708 19:31:46.222428       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.241.186"}
	I0708 19:31:49.407659       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0708 19:31:50.436686       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0708 19:32:28.134560       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0708 19:33:00.747388       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0708 19:33:00.752313       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0708 19:33:00.784661       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0708 19:33:00.784727       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0708 19:33:00.801635       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0708 19:33:00.801768       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0708 19:33:00.831472       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0708 19:33:00.832347       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0708 19:33:00.875255       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0708 19:33:00.875816       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0708 19:33:01.801878       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0708 19:33:01.875961       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0708 19:33:01.887224       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0708 19:34:05.370378       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.111.187"}
	
	
	==> kube-controller-manager [e35d37ebf78b3809e33fc570ccdc8fa7d7a0fd4dcb658545c70675d77960f080] <==
	W0708 19:33:12.204723       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:33:12.204820       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:33:20.213042       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:33:20.213100       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:33:20.852209       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:33:20.852300       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:33:23.782472       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:33:23.782595       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:33:35.686639       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:33:35.686748       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:33:36.696343       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:33:36.696400       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:33:44.019639       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:33:44.019690       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:33:47.915724       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:33:47.915879       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0708 19:34:05.214040       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="46.827535ms"
	I0708 19:34:05.230940       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="16.8435ms"
	I0708 19:34:05.234796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="1.799234ms"
	I0708 19:34:05.244802       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="32.278µs"
	I0708 19:34:08.172759       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0708 19:34:08.187803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="6.952µs"
	I0708 19:34:08.187875       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0708 19:34:08.279258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="26.679439ms"
	I0708 19:34:08.279318       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="35.994µs"
	
	
	==> kube-proxy [49fc1829105fd93b0c9eef5eaf11f30232d42efabb4cb4130c54a76a96ddbd82] <==
	I0708 19:30:08.969042       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:30:08.994726       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.231"]
	I0708 19:30:09.087327       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:30:09.087383       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:30:09.087400       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:30:09.090362       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:30:09.090566       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:30:09.090601       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:30:09.092375       1 config.go:192] "Starting service config controller"
	I0708 19:30:09.092384       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:30:09.092406       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:30:09.092409       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:30:09.092937       1 config.go:319] "Starting node config controller"
	I0708 19:30:09.092944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:30:09.192624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 19:30:09.192661       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:30:09.193345       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9d49be99483f5c15756481dec1f198cbd8e9da87539ae5759ec447421c2bf138] <==
	W0708 19:29:50.557423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 19:29:50.563974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 19:29:50.557606       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 19:29:50.564112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 19:29:51.370285       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 19:29:51.370382       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 19:29:51.388758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 19:29:51.388856       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 19:29:51.417972       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 19:29:51.418070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 19:29:51.420504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 19:29:51.420525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0708 19:29:51.437075       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 19:29:51.437182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0708 19:29:51.446590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 19:29:51.446617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 19:29:51.474313       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 19:29:51.474377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 19:29:51.477182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0708 19:29:51.477232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0708 19:29:51.488557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 19:29:51.488603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 19:29:51.626724       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 19:29:51.626809       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0708 19:29:53.841586       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 08 19:34:05 addons-268316 kubelet[1276]: I0708 19:34:05.222152    1276 memory_manager.go:354] "RemoveStaleState removing state" podUID="26bd046a-4a16-4a94-aa7e-09f3b7b7c6c9" containerName="csi-snapshotter"
	Jul 08 19:34:05 addons-268316 kubelet[1276]: I0708 19:34:05.222186    1276 memory_manager.go:354] "RemoveStaleState removing state" podUID="26bd046a-4a16-4a94-aa7e-09f3b7b7c6c9" containerName="liveness-probe"
	Jul 08 19:34:05 addons-268316 kubelet[1276]: I0708 19:34:05.338914    1276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w8m4\" (UniqueName: \"kubernetes.io/projected/db22bb68-894a-454b-a1d2-9410d39a9528-kube-api-access-9w8m4\") pod \"hello-world-app-86c47465fc-lznqj\" (UID: \"db22bb68-894a-454b-a1d2-9410d39a9528\") " pod="default/hello-world-app-86c47465fc-lznqj"
	Jul 08 19:34:05 addons-268316 kubelet[1276]: I0708 19:34:05.339208    1276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/db22bb68-894a-454b-a1d2-9410d39a9528-gcp-creds\") pod \"hello-world-app-86c47465fc-lznqj\" (UID: \"db22bb68-894a-454b-a1d2-9410d39a9528\") " pod="default/hello-world-app-86c47465fc-lznqj"
	Jul 08 19:34:06 addons-268316 kubelet[1276]: I0708 19:34:06.851638    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrkbt\" (UniqueName: \"kubernetes.io/projected/f5f48486-6578-4b7c-ab34-56de96be0694-kube-api-access-zrkbt\") pod \"f5f48486-6578-4b7c-ab34-56de96be0694\" (UID: \"f5f48486-6578-4b7c-ab34-56de96be0694\") "
	Jul 08 19:34:06 addons-268316 kubelet[1276]: I0708 19:34:06.854203    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5f48486-6578-4b7c-ab34-56de96be0694-kube-api-access-zrkbt" (OuterVolumeSpecName: "kube-api-access-zrkbt") pod "f5f48486-6578-4b7c-ab34-56de96be0694" (UID: "f5f48486-6578-4b7c-ab34-56de96be0694"). InnerVolumeSpecName "kube-api-access-zrkbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 08 19:34:06 addons-268316 kubelet[1276]: I0708 19:34:06.952617    1276 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zrkbt\" (UniqueName: \"kubernetes.io/projected/f5f48486-6578-4b7c-ab34-56de96be0694-kube-api-access-zrkbt\") on node \"addons-268316\" DevicePath \"\""
	Jul 08 19:34:07 addons-268316 kubelet[1276]: I0708 19:34:07.197814    1276 scope.go:117] "RemoveContainer" containerID="6d15b5e6db1e07727eabc12d5e9fae93d2a14b0f50cea418d977648e8ff08c04"
	Jul 08 19:34:07 addons-268316 kubelet[1276]: I0708 19:34:07.294156    1276 scope.go:117] "RemoveContainer" containerID="6d15b5e6db1e07727eabc12d5e9fae93d2a14b0f50cea418d977648e8ff08c04"
	Jul 08 19:34:07 addons-268316 kubelet[1276]: E0708 19:34:07.294881    1276 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d15b5e6db1e07727eabc12d5e9fae93d2a14b0f50cea418d977648e8ff08c04\": container with ID starting with 6d15b5e6db1e07727eabc12d5e9fae93d2a14b0f50cea418d977648e8ff08c04 not found: ID does not exist" containerID="6d15b5e6db1e07727eabc12d5e9fae93d2a14b0f50cea418d977648e8ff08c04"
	Jul 08 19:34:07 addons-268316 kubelet[1276]: I0708 19:34:07.294915    1276 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d15b5e6db1e07727eabc12d5e9fae93d2a14b0f50cea418d977648e8ff08c04"} err="failed to get container status \"6d15b5e6db1e07727eabc12d5e9fae93d2a14b0f50cea418d977648e8ff08c04\": rpc error: code = NotFound desc = could not find container \"6d15b5e6db1e07727eabc12d5e9fae93d2a14b0f50cea418d977648e8ff08c04\": container with ID starting with 6d15b5e6db1e07727eabc12d5e9fae93d2a14b0f50cea418d977648e8ff08c04 not found: ID does not exist"
	Jul 08 19:34:08 addons-268316 kubelet[1276]: I0708 19:34:08.918178    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4812f288-e0bc-4f79-9497-3c911d963eb1" path="/var/lib/kubelet/pods/4812f288-e0bc-4f79-9497-3c911d963eb1/volumes"
	Jul 08 19:34:08 addons-268316 kubelet[1276]: I0708 19:34:08.918639    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a592f04-8095-4e81-befe-9bb48c44e466" path="/var/lib/kubelet/pods/7a592f04-8095-4e81-befe-9bb48c44e466/volumes"
	Jul 08 19:34:08 addons-268316 kubelet[1276]: I0708 19:34:08.919281    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5f48486-6578-4b7c-ab34-56de96be0694" path="/var/lib/kubelet/pods/f5f48486-6578-4b7c-ab34-56de96be0694/volumes"
	Jul 08 19:34:11 addons-268316 kubelet[1276]: I0708 19:34:11.488178    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjbk6\" (UniqueName: \"kubernetes.io/projected/8fd42e6f-0a23-47d3-a8f9-9689b77fd215-kube-api-access-qjbk6\") pod \"8fd42e6f-0a23-47d3-a8f9-9689b77fd215\" (UID: \"8fd42e6f-0a23-47d3-a8f9-9689b77fd215\") "
	Jul 08 19:34:11 addons-268316 kubelet[1276]: I0708 19:34:11.488228    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8fd42e6f-0a23-47d3-a8f9-9689b77fd215-webhook-cert\") pod \"8fd42e6f-0a23-47d3-a8f9-9689b77fd215\" (UID: \"8fd42e6f-0a23-47d3-a8f9-9689b77fd215\") "
	Jul 08 19:34:11 addons-268316 kubelet[1276]: I0708 19:34:11.490776    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd42e6f-0a23-47d3-a8f9-9689b77fd215-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "8fd42e6f-0a23-47d3-a8f9-9689b77fd215" (UID: "8fd42e6f-0a23-47d3-a8f9-9689b77fd215"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 08 19:34:11 addons-268316 kubelet[1276]: I0708 19:34:11.495099    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd42e6f-0a23-47d3-a8f9-9689b77fd215-kube-api-access-qjbk6" (OuterVolumeSpecName: "kube-api-access-qjbk6") pod "8fd42e6f-0a23-47d3-a8f9-9689b77fd215" (UID: "8fd42e6f-0a23-47d3-a8f9-9689b77fd215"). InnerVolumeSpecName "kube-api-access-qjbk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 08 19:34:11 addons-268316 kubelet[1276]: I0708 19:34:11.589526    1276 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8fd42e6f-0a23-47d3-a8f9-9689b77fd215-webhook-cert\") on node \"addons-268316\" DevicePath \"\""
	Jul 08 19:34:11 addons-268316 kubelet[1276]: I0708 19:34:11.589579    1276 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qjbk6\" (UniqueName: \"kubernetes.io/projected/8fd42e6f-0a23-47d3-a8f9-9689b77fd215-kube-api-access-qjbk6\") on node \"addons-268316\" DevicePath \"\""
	Jul 08 19:34:12 addons-268316 kubelet[1276]: I0708 19:34:12.242888    1276 scope.go:117] "RemoveContainer" containerID="28336a9b4fd5b05fe1ea026377bedff654f10b7212f0827e6ca9c18fe3655a88"
	Jul 08 19:34:12 addons-268316 kubelet[1276]: I0708 19:34:12.260645    1276 scope.go:117] "RemoveContainer" containerID="28336a9b4fd5b05fe1ea026377bedff654f10b7212f0827e6ca9c18fe3655a88"
	Jul 08 19:34:12 addons-268316 kubelet[1276]: E0708 19:34:12.261309    1276 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28336a9b4fd5b05fe1ea026377bedff654f10b7212f0827e6ca9c18fe3655a88\": container with ID starting with 28336a9b4fd5b05fe1ea026377bedff654f10b7212f0827e6ca9c18fe3655a88 not found: ID does not exist" containerID="28336a9b4fd5b05fe1ea026377bedff654f10b7212f0827e6ca9c18fe3655a88"
	Jul 08 19:34:12 addons-268316 kubelet[1276]: I0708 19:34:12.261339    1276 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28336a9b4fd5b05fe1ea026377bedff654f10b7212f0827e6ca9c18fe3655a88"} err="failed to get container status \"28336a9b4fd5b05fe1ea026377bedff654f10b7212f0827e6ca9c18fe3655a88\": rpc error: code = NotFound desc = could not find container \"28336a9b4fd5b05fe1ea026377bedff654f10b7212f0827e6ca9c18fe3655a88\": container with ID starting with 28336a9b4fd5b05fe1ea026377bedff654f10b7212f0827e6ca9c18fe3655a88 not found: ID does not exist"
	Jul 08 19:34:12 addons-268316 kubelet[1276]: I0708 19:34:12.918567    1276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fd42e6f-0a23-47d3-a8f9-9689b77fd215" path="/var/lib/kubelet/pods/8fd42e6f-0a23-47d3-a8f9-9689b77fd215/volumes"
	
	
	==> storage-provisioner [0e0486a262195e25e0cbcb85c7f856a35300a55c800deabb7b3cea1c342fb270] <==
	I0708 19:30:15.131453       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 19:30:15.233564       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 19:30:15.233627       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 19:30:15.258347       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 19:30:15.258517       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-268316_4eba19e1-2747-409b-8c55-d9f213142986!
	I0708 19:30:15.258578       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e6a6c31-07a9-4ff0-9bf6-9b1e82c6f6b4", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-268316_4eba19e1-2747-409b-8c55-d9f213142986 became leader
	I0708 19:30:15.361855       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-268316_4eba19e1-2747-409b-8c55-d9f213142986!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-268316 -n addons-268316
helpers_test.go:261: (dbg) Run:  kubectl --context addons-268316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.61s)
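The storage-provisioner excerpt above ends with a normal leader-election sequence: the pod acquires the kube-system/k8s.io-minikube-hostpath lock (recorded against an Endpoints object, per the event) and only then starts the provisioner controller, so that component looks healthy in this run. For reference, a minimal client-go sketch of the same acquire-then-run pattern, assuming in-cluster config and a Lease lock rather than the Endpoints lock shown above; this is an illustration, not minikube's storage-provisioner source:

// Sketch only: acquire a leader lock, then run the controller, mirroring the
// leaderelection.go lines in the storage-provisioner log above.
// Assumptions: in-cluster config and a Lease lock; names are illustrative.
package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname() // the log above uses <profile>_<uuid> as the identity

	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting provisioner controller")
				<-ctx.Done() // the controller loop would run here
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership; exiting")
			},
		},
	})
}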

x
+
TestAddons/parallel/MetricsServer (348.05s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.868937ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-c6gzl" [fa5607f8-de0f-4bb1-b219-54ef33238b21] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
helpers_test.go:344: "metrics-server-c59844bb4-c6gzl" [fa5607f8-de0f-4bb1-b219-54ef33238b21] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005279308s
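The step above waits up to 6m0s for every pod matching the label selector k8s-app=metrics-server in kube-system to become Ready. A minimal client-go sketch of that kind of label-selector readiness wait follows; the helper name and structure are hypothetical, not the test's helpers_test.go code:

// Hypothetical helper: poll until every pod matching a label selector in a
// namespace reports the Ready condition, or the timeout expires.
package waitpods

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func WaitForPodsReady(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			if len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if !isReady(&p) {
					return false, nil
				}
			}
			return true, nil
		})
}

func isReady(p *corev1.Pod) bool {
	if p.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// Example use: WaitForPodsReady(ctx, clientset, "kube-system", "k8s-app=metrics-server", 6*time.Minute)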
addons_test.go:417: (dbg) Run:  kubectl --context addons-268316 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-268316 top pods -n kube-system: exit status 1 (68.343809ms)

** stderr ** 
	error: metrics not available yet

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-268316 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-268316 top pods -n kube-system: exit status 1 (73.639667ms)

** stderr ** 
	error: metrics not available yet

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-268316 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-268316 top pods -n kube-system: exit status 1 (88.658758ms)

** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-268316, age: 2m1.552336969s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-268316 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-268316 top pods -n kube-system: exit status 1 (67.059919ms)

** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-268316, age: 2m10.798120594s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-268316 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-268316 top pods -n kube-system: exit status 1 (68.198723ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdmnx, age: 2m3.555251426s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-268316 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-268316 top pods -n kube-system: exit status 1 (65.603437ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdmnx, age: 2m26.306666856s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-268316 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-268316 top pods -n kube-system: exit status 1 (64.80589ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdmnx, age: 2m52.03982071s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-268316 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-268316 top pods -n kube-system: exit status 1 (63.592194ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdmnx, age: 3m33.166897869s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-268316 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-268316 top pods -n kube-system: exit status 1 (67.313673ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdmnx, age: 3m59.813222813s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-268316 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-268316 top pods -n kube-system: exit status 1 (68.396771ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdmnx, age: 5m11.958656693s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-268316 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-268316 top pods -n kube-system: exit status 1 (67.773008ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdmnx, age: 6m9.129950202s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-268316 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-268316 top pods -n kube-system: exit status 1 (63.883435ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mdmnx, age: 7m20.914101827s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 addons disable metrics-server --alsologtostderr -v=1
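Each kubectl top pods attempt above fails with a "metrics not available" variant, i.e. the metrics.k8s.io API never returned samples for the kube-system pods within the test's retry window. A minimal sketch of verifying the same thing directly against the metrics API, using the k8s.io/metrics client and an assumed default kubeconfig (illustration only, not addons_test.go's implementation):

// Sketch only: poll the metrics.k8s.io API until pod metrics for a namespace
// are available, mirroring what `kubectl top pods` consumes.
// Assumption: the kubeconfig written by minikube is at the default location.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	mc, err := metricsclient.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		podMetrics, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(context.Background(), metav1.ListOptions{})
		if err == nil && len(podMetrics.Items) > 0 {
			for _, pm := range podMetrics.Items {
				for _, c := range pm.Containers {
					fmt.Printf("%s/%s cpu=%s mem=%s\n",
						pm.Name, c.Name, c.Usage.Cpu().String(), c.Usage.Memory().String())
				}
			}
			return
		}
		// Same symptom as the log above: the API answers but has no samples yet.
		time.Sleep(10 * time.Second)
	}
	log.Fatal("metrics never became available")
}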
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-268316 -n addons-268316
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-268316 logs -n 25: (1.404646865s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:29 UTC |
	| delete  | -p download-only-972529                                                                     | download-only-972529 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:29 UTC |
	| delete  | -p download-only-548391                                                                     | download-only-548391 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:29 UTC |
	| delete  | -p download-only-972529                                                                     | download-only-972529 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-230858 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC |                     |
	|         | binary-mirror-230858                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39545                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-230858                                                                     | binary-mirror-230858 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC |                     |
	|         | addons-268316                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC |                     |
	|         | addons-268316                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-268316 --wait=true                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:31 UTC | 08 Jul 24 19:31 UTC |
	|         | -p addons-268316                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-268316 addons disable                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:31 UTC | 08 Jul 24 19:31 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-268316 ip                                                                            | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:31 UTC | 08 Jul 24 19:31 UTC |
	| addons  | addons-268316 addons disable                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:31 UTC | 08 Jul 24 19:31 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:31 UTC | 08 Jul 24 19:31 UTC |
	|         | addons-268316                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-268316 ssh curl -s                                                                   | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:31 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-268316 ssh cat                                                                       | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:32 UTC | 08 Jul 24 19:32 UTC |
	|         | /opt/local-path-provisioner/pvc-fe0dcfdc-b3e9-41ce-a1cc-00fdfd88c367_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-268316 addons disable                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:32 UTC | 08 Jul 24 19:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:32 UTC | 08 Jul 24 19:32 UTC |
	|         | -p addons-268316                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:32 UTC | 08 Jul 24 19:32 UTC |
	|         | addons-268316                                                                               |                      |         |         |                     |                     |
	| addons  | addons-268316 addons                                                                        | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:32 UTC | 08 Jul 24 19:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-268316 addons                                                                        | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:33 UTC | 08 Jul 24 19:33 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-268316 ip                                                                            | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:34 UTC | 08 Jul 24 19:34 UTC |
	| addons  | addons-268316 addons disable                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:34 UTC | 08 Jul 24 19:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-268316 addons disable                                                                | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:34 UTC | 08 Jul 24 19:34 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-268316 addons                                                                        | addons-268316        | jenkins | v1.33.1 | 08 Jul 24 19:37 UTC | 08 Jul 24 19:37 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 19:29:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 19:29:12.804120   13764 out.go:291] Setting OutFile to fd 1 ...
	I0708 19:29:12.804225   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:29:12.804234   13764 out.go:304] Setting ErrFile to fd 2...
	I0708 19:29:12.804238   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:29:12.804419   13764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 19:29:12.805003   13764 out.go:298] Setting JSON to false
	I0708 19:29:12.805783   13764 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":702,"bootTime":1720466251,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 19:29:12.805840   13764 start.go:139] virtualization: kvm guest
	I0708 19:29:12.808052   13764 out.go:177] * [addons-268316] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 19:29:12.809555   13764 notify.go:220] Checking for updates...
	I0708 19:29:12.809604   13764 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 19:29:12.811054   13764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 19:29:12.812597   13764 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:29:12.813976   13764 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:29:12.815480   13764 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 19:29:12.817060   13764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 19:29:12.818707   13764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 19:29:12.850625   13764 out.go:177] * Using the kvm2 driver based on user configuration
	I0708 19:29:12.851864   13764 start.go:297] selected driver: kvm2
	I0708 19:29:12.851880   13764 start.go:901] validating driver "kvm2" against <nil>
	I0708 19:29:12.851891   13764 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 19:29:12.852594   13764 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 19:29:12.852671   13764 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 19:29:12.867676   13764 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 19:29:12.867735   13764 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 19:29:12.868003   13764 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 19:29:12.868082   13764 cni.go:84] Creating CNI manager for ""
	I0708 19:29:12.868099   13764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 19:29:12.868111   13764 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 19:29:12.868185   13764 start.go:340] cluster config:
	{Name:addons-268316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-268316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:29:12.868312   13764 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 19:29:12.871172   13764 out.go:177] * Starting "addons-268316" primary control-plane node in "addons-268316" cluster
	I0708 19:29:12.872622   13764 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 19:29:12.872659   13764 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 19:29:12.872666   13764 cache.go:56] Caching tarball of preloaded images
	I0708 19:29:12.872735   13764 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 19:29:12.872744   13764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 19:29:12.873042   13764 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/config.json ...
	I0708 19:29:12.873061   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/config.json: {Name:mk16b7cb24f23e9d6b1a688b3b1b6627cd8a91c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:12.873215   13764 start.go:360] acquireMachinesLock for addons-268316: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 19:29:12.873261   13764 start.go:364] duration metric: took 33.304µs to acquireMachinesLock for "addons-268316"
	I0708 19:29:12.873278   13764 start.go:93] Provisioning new machine with config: &{Name:addons-268316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:addons-268316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:29:12.873330   13764 start.go:125] createHost starting for "" (driver="kvm2")
	I0708 19:29:12.874996   13764 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0708 19:29:12.875125   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:29:12.875168   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:29:12.889448   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0708 19:29:12.889900   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:29:12.890466   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:29:12.890482   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:29:12.890773   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:29:12.890969   13764 main.go:141] libmachine: (addons-268316) Calling .GetMachineName
	I0708 19:29:12.891097   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:12.891248   13764 start.go:159] libmachine.API.Create for "addons-268316" (driver="kvm2")
	I0708 19:29:12.891280   13764 client.go:168] LocalClient.Create starting
	I0708 19:29:12.891326   13764 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem
	I0708 19:29:13.345276   13764 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem
	I0708 19:29:13.434731   13764 main.go:141] libmachine: Running pre-create checks...
	I0708 19:29:13.434757   13764 main.go:141] libmachine: (addons-268316) Calling .PreCreateCheck
	I0708 19:29:13.435305   13764 main.go:141] libmachine: (addons-268316) Calling .GetConfigRaw
	I0708 19:29:13.435760   13764 main.go:141] libmachine: Creating machine...
	I0708 19:29:13.435777   13764 main.go:141] libmachine: (addons-268316) Calling .Create
	I0708 19:29:13.435962   13764 main.go:141] libmachine: (addons-268316) Creating KVM machine...
	I0708 19:29:13.437298   13764 main.go:141] libmachine: (addons-268316) DBG | found existing default KVM network
	I0708 19:29:13.438154   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:13.438024   13786 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0708 19:29:13.438207   13764 main.go:141] libmachine: (addons-268316) DBG | created network xml: 
	I0708 19:29:13.438228   13764 main.go:141] libmachine: (addons-268316) DBG | <network>
	I0708 19:29:13.438235   13764 main.go:141] libmachine: (addons-268316) DBG |   <name>mk-addons-268316</name>
	I0708 19:29:13.438242   13764 main.go:141] libmachine: (addons-268316) DBG |   <dns enable='no'/>
	I0708 19:29:13.438248   13764 main.go:141] libmachine: (addons-268316) DBG |   
	I0708 19:29:13.438256   13764 main.go:141] libmachine: (addons-268316) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0708 19:29:13.438264   13764 main.go:141] libmachine: (addons-268316) DBG |     <dhcp>
	I0708 19:29:13.438270   13764 main.go:141] libmachine: (addons-268316) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0708 19:29:13.438299   13764 main.go:141] libmachine: (addons-268316) DBG |     </dhcp>
	I0708 19:29:13.438313   13764 main.go:141] libmachine: (addons-268316) DBG |   </ip>
	I0708 19:29:13.438321   13764 main.go:141] libmachine: (addons-268316) DBG |   
	I0708 19:29:13.438335   13764 main.go:141] libmachine: (addons-268316) DBG | </network>
	I0708 19:29:13.438349   13764 main.go:141] libmachine: (addons-268316) DBG | 
	I0708 19:29:13.443833   13764 main.go:141] libmachine: (addons-268316) DBG | trying to create private KVM network mk-addons-268316 192.168.39.0/24...
	I0708 19:29:13.509625   13764 main.go:141] libmachine: (addons-268316) DBG | private KVM network mk-addons-268316 192.168.39.0/24 created
	I0708 19:29:13.509661   13764 main.go:141] libmachine: (addons-268316) Setting up store path in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316 ...
	I0708 19:29:13.509684   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:13.509601   13786 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:29:13.509705   13764 main.go:141] libmachine: (addons-268316) Building disk image from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso
	I0708 19:29:13.509785   13764 main.go:141] libmachine: (addons-268316) Downloading /home/jenkins/minikube-integration/19195-5988/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso...
	I0708 19:29:13.754837   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:13.754690   13786 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa...
	I0708 19:29:13.824387   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:13.824259   13786 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/addons-268316.rawdisk...
	I0708 19:29:13.824416   13764 main.go:141] libmachine: (addons-268316) DBG | Writing magic tar header
	I0708 19:29:13.824426   13764 main.go:141] libmachine: (addons-268316) DBG | Writing SSH key tar header
	I0708 19:29:13.824434   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:13.824379   13786 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316 ...
	I0708 19:29:13.824559   13764 main.go:141] libmachine: (addons-268316) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316 (perms=drwx------)
	I0708 19:29:13.824589   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316
	I0708 19:29:13.824601   13764 main.go:141] libmachine: (addons-268316) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines (perms=drwxr-xr-x)
	I0708 19:29:13.824611   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines
	I0708 19:29:13.824626   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:29:13.824636   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988
	I0708 19:29:13.824652   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0708 19:29:13.824667   13764 main.go:141] libmachine: (addons-268316) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube (perms=drwxr-xr-x)
	I0708 19:29:13.824676   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home/jenkins
	I0708 19:29:13.824691   13764 main.go:141] libmachine: (addons-268316) DBG | Checking permissions on dir: /home
	I0708 19:29:13.824702   13764 main.go:141] libmachine: (addons-268316) DBG | Skipping /home - not owner
	I0708 19:29:13.824753   13764 main.go:141] libmachine: (addons-268316) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988 (perms=drwxrwxr-x)
	I0708 19:29:13.824800   13764 main.go:141] libmachine: (addons-268316) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0708 19:29:13.824816   13764 main.go:141] libmachine: (addons-268316) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0708 19:29:13.824830   13764 main.go:141] libmachine: (addons-268316) Creating domain...
	I0708 19:29:13.825723   13764 main.go:141] libmachine: (addons-268316) define libvirt domain using xml: 
	I0708 19:29:13.825751   13764 main.go:141] libmachine: (addons-268316) <domain type='kvm'>
	I0708 19:29:13.825761   13764 main.go:141] libmachine: (addons-268316)   <name>addons-268316</name>
	I0708 19:29:13.825769   13764 main.go:141] libmachine: (addons-268316)   <memory unit='MiB'>4000</memory>
	I0708 19:29:13.825777   13764 main.go:141] libmachine: (addons-268316)   <vcpu>2</vcpu>
	I0708 19:29:13.825782   13764 main.go:141] libmachine: (addons-268316)   <features>
	I0708 19:29:13.825790   13764 main.go:141] libmachine: (addons-268316)     <acpi/>
	I0708 19:29:13.825799   13764 main.go:141] libmachine: (addons-268316)     <apic/>
	I0708 19:29:13.825807   13764 main.go:141] libmachine: (addons-268316)     <pae/>
	I0708 19:29:13.825814   13764 main.go:141] libmachine: (addons-268316)     
	I0708 19:29:13.825845   13764 main.go:141] libmachine: (addons-268316)   </features>
	I0708 19:29:13.825866   13764 main.go:141] libmachine: (addons-268316)   <cpu mode='host-passthrough'>
	I0708 19:29:13.825894   13764 main.go:141] libmachine: (addons-268316)   
	I0708 19:29:13.825925   13764 main.go:141] libmachine: (addons-268316)   </cpu>
	I0708 19:29:13.825935   13764 main.go:141] libmachine: (addons-268316)   <os>
	I0708 19:29:13.825943   13764 main.go:141] libmachine: (addons-268316)     <type>hvm</type>
	I0708 19:29:13.825949   13764 main.go:141] libmachine: (addons-268316)     <boot dev='cdrom'/>
	I0708 19:29:13.825959   13764 main.go:141] libmachine: (addons-268316)     <boot dev='hd'/>
	I0708 19:29:13.825968   13764 main.go:141] libmachine: (addons-268316)     <bootmenu enable='no'/>
	I0708 19:29:13.825978   13764 main.go:141] libmachine: (addons-268316)   </os>
	I0708 19:29:13.825986   13764 main.go:141] libmachine: (addons-268316)   <devices>
	I0708 19:29:13.826002   13764 main.go:141] libmachine: (addons-268316)     <disk type='file' device='cdrom'>
	I0708 19:29:13.826018   13764 main.go:141] libmachine: (addons-268316)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/boot2docker.iso'/>
	I0708 19:29:13.826031   13764 main.go:141] libmachine: (addons-268316)       <target dev='hdc' bus='scsi'/>
	I0708 19:29:13.826040   13764 main.go:141] libmachine: (addons-268316)       <readonly/>
	I0708 19:29:13.826045   13764 main.go:141] libmachine: (addons-268316)     </disk>
	I0708 19:29:13.826052   13764 main.go:141] libmachine: (addons-268316)     <disk type='file' device='disk'>
	I0708 19:29:13.826063   13764 main.go:141] libmachine: (addons-268316)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0708 19:29:13.826082   13764 main.go:141] libmachine: (addons-268316)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/addons-268316.rawdisk'/>
	I0708 19:29:13.826094   13764 main.go:141] libmachine: (addons-268316)       <target dev='hda' bus='virtio'/>
	I0708 19:29:13.826114   13764 main.go:141] libmachine: (addons-268316)     </disk>
	I0708 19:29:13.826125   13764 main.go:141] libmachine: (addons-268316)     <interface type='network'>
	I0708 19:29:13.826138   13764 main.go:141] libmachine: (addons-268316)       <source network='mk-addons-268316'/>
	I0708 19:29:13.826149   13764 main.go:141] libmachine: (addons-268316)       <model type='virtio'/>
	I0708 19:29:13.826161   13764 main.go:141] libmachine: (addons-268316)     </interface>
	I0708 19:29:13.826171   13764 main.go:141] libmachine: (addons-268316)     <interface type='network'>
	I0708 19:29:13.826183   13764 main.go:141] libmachine: (addons-268316)       <source network='default'/>
	I0708 19:29:13.826194   13764 main.go:141] libmachine: (addons-268316)       <model type='virtio'/>
	I0708 19:29:13.826206   13764 main.go:141] libmachine: (addons-268316)     </interface>
	I0708 19:29:13.826216   13764 main.go:141] libmachine: (addons-268316)     <serial type='pty'>
	I0708 19:29:13.826232   13764 main.go:141] libmachine: (addons-268316)       <target port='0'/>
	I0708 19:29:13.826245   13764 main.go:141] libmachine: (addons-268316)     </serial>
	I0708 19:29:13.826253   13764 main.go:141] libmachine: (addons-268316)     <console type='pty'>
	I0708 19:29:13.826264   13764 main.go:141] libmachine: (addons-268316)       <target type='serial' port='0'/>
	I0708 19:29:13.826272   13764 main.go:141] libmachine: (addons-268316)     </console>
	I0708 19:29:13.826276   13764 main.go:141] libmachine: (addons-268316)     <rng model='virtio'>
	I0708 19:29:13.826285   13764 main.go:141] libmachine: (addons-268316)       <backend model='random'>/dev/random</backend>
	I0708 19:29:13.826292   13764 main.go:141] libmachine: (addons-268316)     </rng>
	I0708 19:29:13.826297   13764 main.go:141] libmachine: (addons-268316)     
	I0708 19:29:13.826308   13764 main.go:141] libmachine: (addons-268316)     
	I0708 19:29:13.826315   13764 main.go:141] libmachine: (addons-268316)   </devices>
	I0708 19:29:13.826320   13764 main.go:141] libmachine: (addons-268316) </domain>
	I0708 19:29:13.826343   13764 main.go:141] libmachine: (addons-268316) 
	I0708 19:29:13.831896   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:5f:ef:35 in network default
	I0708 19:29:13.832463   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:13.832503   13764 main.go:141] libmachine: (addons-268316) Ensuring networks are active...
	I0708 19:29:13.833151   13764 main.go:141] libmachine: (addons-268316) Ensuring network default is active
	I0708 19:29:13.833457   13764 main.go:141] libmachine: (addons-268316) Ensuring network mk-addons-268316 is active
	I0708 19:29:13.834053   13764 main.go:141] libmachine: (addons-268316) Getting domain xml...
	I0708 19:29:13.834844   13764 main.go:141] libmachine: (addons-268316) Creating domain...
	I0708 19:29:15.232307   13764 main.go:141] libmachine: (addons-268316) Waiting to get IP...
	I0708 19:29:15.233240   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:15.233767   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:15.233791   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:15.233740   13786 retry.go:31] will retry after 306.13701ms: waiting for machine to come up
	I0708 19:29:15.541108   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:15.541535   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:15.541554   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:15.541494   13786 retry.go:31] will retry after 297.323999ms: waiting for machine to come up
	I0708 19:29:15.839831   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:15.840232   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:15.840259   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:15.840178   13786 retry.go:31] will retry after 456.898587ms: waiting for machine to come up
	I0708 19:29:16.298829   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:16.299238   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:16.299261   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:16.299185   13786 retry.go:31] will retry after 415.573876ms: waiting for machine to come up
	I0708 19:29:16.716754   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:16.717134   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:16.717173   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:16.717076   13786 retry.go:31] will retry after 520.428467ms: waiting for machine to come up
	I0708 19:29:17.239014   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:17.239555   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:17.239588   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:17.239518   13786 retry.go:31] will retry after 669.632948ms: waiting for machine to come up
	I0708 19:29:17.911160   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:17.911608   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:17.911631   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:17.911568   13786 retry.go:31] will retry after 1.141733478s: waiting for machine to come up
	I0708 19:29:19.054876   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:19.055391   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:19.055412   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:19.055352   13786 retry.go:31] will retry after 974.557592ms: waiting for machine to come up
	I0708 19:29:20.031693   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:20.032130   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:20.032174   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:20.032108   13786 retry.go:31] will retry after 1.303729308s: waiting for machine to come up
	I0708 19:29:21.337418   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:21.337813   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:21.337833   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:21.337779   13786 retry.go:31] will retry after 2.103034523s: waiting for machine to come up
	I0708 19:29:23.441869   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:23.442401   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:23.442428   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:23.442341   13786 retry.go:31] will retry after 2.055610278s: waiting for machine to come up
	I0708 19:29:25.500460   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:25.500781   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:25.500804   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:25.500741   13786 retry.go:31] will retry after 2.588112058s: waiting for machine to come up
	I0708 19:29:28.089986   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:28.090395   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:28.090413   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:28.090353   13786 retry.go:31] will retry after 2.767394929s: waiting for machine to come up
	I0708 19:29:30.861280   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:30.861656   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find current IP address of domain addons-268316 in network mk-addons-268316
	I0708 19:29:30.861684   13764 main.go:141] libmachine: (addons-268316) DBG | I0708 19:29:30.861604   13786 retry.go:31] will retry after 3.925819648s: waiting for machine to come up
	I0708 19:29:34.789404   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:34.789865   13764 main.go:141] libmachine: (addons-268316) Found IP for machine: 192.168.39.231
	I0708 19:29:34.789888   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has current primary IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:34.789894   13764 main.go:141] libmachine: (addons-268316) Reserving static IP address...
	I0708 19:29:34.790335   13764 main.go:141] libmachine: (addons-268316) DBG | unable to find host DHCP lease matching {name: "addons-268316", mac: "52:54:00:43:46:2e", ip: "192.168.39.231"} in network mk-addons-268316
	I0708 19:29:34.861738   13764 main.go:141] libmachine: (addons-268316) Reserved static IP address: 192.168.39.231
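
The IP lookups above follow a simple poll-with-growing-backoff loop. A minimal, self-contained Go sketch of that pattern (the probe, attempt count and delays are illustrative and not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls probe until it succeeds or attempts run out, sleeping a
// little longer (plus jitter) after each failure, like the retry lines above.
func waitFor(probe func() (string, error), attempts int) (string, error) {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		if ip, err := probe(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay between polls
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	n := 0
	ip, err := waitFor(func() (string, error) {
		n++
		if n < 4 { // pretend the DHCP lease shows up on the 4th poll
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.231", nil
	}, 10)
	fmt.Println(ip, err)
}
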
	I0708 19:29:34.861777   13764 main.go:141] libmachine: (addons-268316) DBG | Getting to WaitForSSH function...
	I0708 19:29:34.861786   13764 main.go:141] libmachine: (addons-268316) Waiting for SSH to be available...
	I0708 19:29:34.864294   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:34.864943   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:34.864967   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:34.865185   13764 main.go:141] libmachine: (addons-268316) DBG | Using SSH client type: external
	I0708 19:29:34.865209   13764 main.go:141] libmachine: (addons-268316) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa (-rw-------)
	I0708 19:29:34.865244   13764 main.go:141] libmachine: (addons-268316) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 19:29:34.865264   13764 main.go:141] libmachine: (addons-268316) DBG | About to run SSH command:
	I0708 19:29:34.865293   13764 main.go:141] libmachine: (addons-268316) DBG | exit 0
	I0708 19:29:35.000138   13764 main.go:141] libmachine: (addons-268316) DBG | SSH cmd err, output: <nil>: 
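
The "exit 0" probe above is just the external ssh binary run until it succeeds. A hedged sketch of the same check with os/exec, using only flags that appear in the log; the address and key path are placeholders:

package main

import (
	"fmt"
	"os/exec"
)

// sshReady returns nil once `ssh ... exit 0` succeeds, i.e. sshd on the guest
// accepts the key and runs a trivial command, mirroring the WaitForSSH step above.
func sshReady(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder values; in the run above the key lives under .minikube/machines.
	fmt.Println(sshReady("192.168.39.231", "/path/to/id_rsa"))
}
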
	I0708 19:29:35.000434   13764 main.go:141] libmachine: (addons-268316) KVM machine creation complete!
	I0708 19:29:35.000759   13764 main.go:141] libmachine: (addons-268316) Calling .GetConfigRaw
	I0708 19:29:35.001272   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:35.001471   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:35.001621   13764 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0708 19:29:35.001635   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:29:35.002809   13764 main.go:141] libmachine: Detecting operating system of created instance...
	I0708 19:29:35.002825   13764 main.go:141] libmachine: Waiting for SSH to be available...
	I0708 19:29:35.002837   13764 main.go:141] libmachine: Getting to WaitForSSH function...
	I0708 19:29:35.002843   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:35.005239   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.005513   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.005538   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.005658   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:35.005825   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.005984   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.006149   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:35.006304   13764 main.go:141] libmachine: Using SSH client type: native
	I0708 19:29:35.006479   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0708 19:29:35.006490   13764 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0708 19:29:35.123016   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 19:29:35.123043   13764 main.go:141] libmachine: Detecting the provisioner...
	I0708 19:29:35.123054   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:35.127185   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.127572   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.127605   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.127736   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:35.127957   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.128148   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.128296   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:35.128450   13764 main.go:141] libmachine: Using SSH client type: native
	I0708 19:29:35.128652   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0708 19:29:35.128671   13764 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0708 19:29:35.244615   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0708 19:29:35.244702   13764 main.go:141] libmachine: found compatible host: buildroot
	I0708 19:29:35.244714   13764 main.go:141] libmachine: Provisioning with buildroot...
	I0708 19:29:35.244724   13764 main.go:141] libmachine: (addons-268316) Calling .GetMachineName
	I0708 19:29:35.245014   13764 buildroot.go:166] provisioning hostname "addons-268316"
	I0708 19:29:35.245039   13764 main.go:141] libmachine: (addons-268316) Calling .GetMachineName
	I0708 19:29:35.245229   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:35.248071   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.248519   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.248544   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.248744   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:35.248969   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.249166   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.249315   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:35.249455   13764 main.go:141] libmachine: Using SSH client type: native
	I0708 19:29:35.249623   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0708 19:29:35.249643   13764 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-268316 && echo "addons-268316" | sudo tee /etc/hostname
	I0708 19:29:35.379013   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-268316
	
	I0708 19:29:35.379051   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:35.382288   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.382657   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.382692   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.382919   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:35.383115   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.383268   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.383415   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:35.383597   13764 main.go:141] libmachine: Using SSH client type: native
	I0708 19:29:35.383760   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0708 19:29:35.383776   13764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-268316' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-268316/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-268316' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 19:29:35.509773   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
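
The shell snippet above keeps /etc/hosts consistent with the new hostname. The same logic expressed as a small standalone Go function for clarity (not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell above: if no line already ends with the
// hostname, either rewrite an existing 127.0.1.1 entry or append a new one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // hostname already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(hosts, "addons-268316"))
}
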
	I0708 19:29:35.509798   13764 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 19:29:35.509818   13764 buildroot.go:174] setting up certificates
	I0708 19:29:35.509838   13764 provision.go:84] configureAuth start
	I0708 19:29:35.509847   13764 main.go:141] libmachine: (addons-268316) Calling .GetMachineName
	I0708 19:29:35.510133   13764 main.go:141] libmachine: (addons-268316) Calling .GetIP
	I0708 19:29:35.512876   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.513246   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.513277   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.513402   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:35.515506   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.515875   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.515911   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.516070   13764 provision.go:143] copyHostCerts
	I0708 19:29:35.516131   13764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 19:29:35.516277   13764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 19:29:35.516337   13764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 19:29:35.516403   13764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.addons-268316 san=[127.0.0.1 192.168.39.231 addons-268316 localhost minikube]
	I0708 19:29:35.849960   13764 provision.go:177] copyRemoteCerts
	I0708 19:29:35.850015   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 19:29:35.850034   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:35.852585   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.852861   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:35.852887   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:35.853046   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:35.853231   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:35.853375   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:35.853478   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:29:35.942985   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 19:29:35.970444   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 19:29:35.995509   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0708 19:29:36.021443   13764 provision.go:87] duration metric: took 511.590281ms to configureAuth
	I0708 19:29:36.021480   13764 buildroot.go:189] setting minikube options for container-runtime
	I0708 19:29:36.021696   13764 config.go:182] Loaded profile config "addons-268316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:29:36.021786   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:36.024731   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.025122   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.025159   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.025303   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:36.025546   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.025771   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.025933   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:36.026161   13764 main.go:141] libmachine: Using SSH client type: native
	I0708 19:29:36.026370   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0708 19:29:36.026393   13764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 19:29:36.450335   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 19:29:36.450358   13764 main.go:141] libmachine: Checking connection to Docker...
	I0708 19:29:36.450366   13764 main.go:141] libmachine: (addons-268316) Calling .GetURL
	I0708 19:29:36.451405   13764 main.go:141] libmachine: (addons-268316) DBG | Using libvirt version 6000000
	I0708 19:29:36.453720   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.454074   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.454095   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.454289   13764 main.go:141] libmachine: Docker is up and running!
	I0708 19:29:36.454306   13764 main.go:141] libmachine: Reticulating splines...
	I0708 19:29:36.454312   13764 client.go:171] duration metric: took 23.563023008s to LocalClient.Create
	I0708 19:29:36.454333   13764 start.go:167] duration metric: took 23.563088586s to libmachine.API.Create "addons-268316"
	I0708 19:29:36.454349   13764 start.go:293] postStartSetup for "addons-268316" (driver="kvm2")
	I0708 19:29:36.454360   13764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 19:29:36.454375   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:36.454577   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 19:29:36.454600   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:36.456743   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.457104   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.457131   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.457289   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:36.457458   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.457688   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:36.457865   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:29:36.550754   13764 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 19:29:36.555548   13764 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 19:29:36.555581   13764 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 19:29:36.555655   13764 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 19:29:36.555684   13764 start.go:296] duration metric: took 101.328003ms for postStartSetup
	I0708 19:29:36.555725   13764 main.go:141] libmachine: (addons-268316) Calling .GetConfigRaw
	I0708 19:29:36.604342   13764 main.go:141] libmachine: (addons-268316) Calling .GetIP
	I0708 19:29:36.607210   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.607552   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.607594   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.607833   13764 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/config.json ...
	I0708 19:29:36.608008   13764 start.go:128] duration metric: took 23.734668795s to createHost
	I0708 19:29:36.608028   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:36.610293   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.610672   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.610699   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.610832   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:36.611032   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.611225   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.611369   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:36.611529   13764 main.go:141] libmachine: Using SSH client type: native
	I0708 19:29:36.611724   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0708 19:29:36.611739   13764 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 19:29:36.729030   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720466976.702143206
	
	I0708 19:29:36.729055   13764 fix.go:216] guest clock: 1720466976.702143206
	I0708 19:29:36.729064   13764 fix.go:229] Guest: 2024-07-08 19:29:36.702143206 +0000 UTC Remote: 2024-07-08 19:29:36.608018885 +0000 UTC m=+23.838704072 (delta=94.124321ms)
	I0708 19:29:36.729110   13764 fix.go:200] guest clock delta is within tolerance: 94.124321ms
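
The guest-clock check compares the output of `date +%s.%N` on the guest against the host's wall clock. A sketch of that comparison; the 2-second tolerance here is an assumption, not necessarily the value minikube uses:

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDeltaOK parses a guest timestamp in `date +%s.%N` form and reports
// whether it is within tolerance of the host clock, like the fix.go lines above.
func clockDeltaOK(guestUnix float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	sec, frac := math.Modf(guestUnix)
	guest := time.Unix(int64(sec), int64(frac*1e9))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Guest and host timestamps taken from the log above.
	host := time.Date(2024, time.July, 8, 19, 29, 36, 608018885, time.UTC)
	delta, ok := clockDeltaOK(1720466976.702143206, host, 2*time.Second)
	fmt.Println(delta, ok)
}
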
	I0708 19:29:36.729118   13764 start.go:83] releasing machines lock for "addons-268316", held for 23.855846693s
	I0708 19:29:36.729146   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:36.729424   13764 main.go:141] libmachine: (addons-268316) Calling .GetIP
	I0708 19:29:36.732044   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.732466   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.732492   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.732677   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:36.733152   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:36.733338   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:29:36.733424   13764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 19:29:36.733455   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:36.733556   13764 ssh_runner.go:195] Run: cat /version.json
	I0708 19:29:36.733585   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:29:36.736459   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.736765   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.736816   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.736837   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.737026   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:36.737108   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:36.737127   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:36.737242   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.737312   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:29:36.737458   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:36.737482   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:29:36.737650   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:29:36.737645   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:29:36.737804   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:29:36.816266   13764 ssh_runner.go:195] Run: systemctl --version
	I0708 19:29:36.846706   13764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 19:29:37.055702   13764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 19:29:37.061802   13764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 19:29:37.061882   13764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 19:29:37.079087   13764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 19:29:37.079109   13764 start.go:494] detecting cgroup driver to use...
	I0708 19:29:37.079181   13764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 19:29:37.097180   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 19:29:37.112232   13764 docker.go:217] disabling cri-docker service (if available) ...
	I0708 19:29:37.112289   13764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 19:29:37.126575   13764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 19:29:37.141094   13764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 19:29:37.261710   13764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 19:29:37.417238   13764 docker.go:233] disabling docker service ...
	I0708 19:29:37.417315   13764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 19:29:37.431461   13764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 19:29:37.443941   13764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 19:29:37.560663   13764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 19:29:37.678902   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 19:29:37.694003   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 19:29:37.713565   13764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 19:29:37.713638   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.724284   13764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 19:29:37.724367   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.734950   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.745884   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.756414   13764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 19:29:37.767047   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.777426   13764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.796222   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:29:37.806922   13764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 19:29:37.816639   13764 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 19:29:37.816698   13764 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 19:29:37.829347   13764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 19:29:37.838920   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:29:37.945497   13764 ssh_runner.go:195] Run: sudo systemctl restart crio
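
The sed commands above pin CRI-O's pause image and cgroup manager in its drop-in config before restarting the service. An equivalent standalone sketch of the two rewrites (the sample config content is made up):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the two sed edits above: force a specific pause
// image and cgroup manager in a crio.conf drop-in.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(conf, "registry.k8s.io/pause:3.9", "cgroupfs"))
}
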
	I0708 19:29:38.090753   13764 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 19:29:38.090839   13764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 19:29:38.095412   13764 start.go:562] Will wait 60s for crictl version
	I0708 19:29:38.095501   13764 ssh_runner.go:195] Run: which crictl
	I0708 19:29:38.099181   13764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 19:29:38.138719   13764 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 19:29:38.138826   13764 ssh_runner.go:195] Run: crio --version
	I0708 19:29:38.167181   13764 ssh_runner.go:195] Run: crio --version
	I0708 19:29:38.197411   13764 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 19:29:38.198831   13764 main.go:141] libmachine: (addons-268316) Calling .GetIP
	I0708 19:29:38.201380   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:38.201695   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:29:38.201720   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:29:38.201899   13764 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 19:29:38.205967   13764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:29:38.218881   13764 kubeadm.go:877] updating cluster {Name:addons-268316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-268316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 19:29:38.218982   13764 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 19:29:38.219023   13764 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 19:29:38.250951   13764 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 19:29:38.251013   13764 ssh_runner.go:195] Run: which lz4
	I0708 19:29:38.255199   13764 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0708 19:29:38.259632   13764 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 19:29:38.259661   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 19:29:39.590953   13764 crio.go:462] duration metric: took 1.335779016s to copy over tarball
	I0708 19:29:39.591045   13764 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 19:29:41.849602   13764 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.258531874s)
	I0708 19:29:41.849625   13764 crio.go:469] duration metric: took 2.258635163s to extract the tarball
	I0708 19:29:41.849631   13764 ssh_runner.go:146] rm: /preloaded.tar.lz4
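
Preload handling above is: copy the lz4 tarball to the guest, unpack it into /var with tar, then delete it. A rough sketch of the unpack-and-time step, assuming the same tar flags as the log; paths are placeholders and root privileges would be needed on a real guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// extractPreload unpacks a preloaded image tarball the same way the
// ssh_runner call above does, and reports how long extraction took.
func extractPreload(tarball, dest string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	err := cmd.Run()
	return time.Since(start), err
}

func main() {
	d, err := extractPreload("/preloaded.tar.lz4", "/var")
	fmt.Println("extraction took", d, "err:", err)
}
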
	I0708 19:29:41.886974   13764 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 19:29:41.927676   13764 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 19:29:41.927698   13764 cache_images.go:84] Images are preloaded, skipping loading
	I0708 19:29:41.927706   13764 kubeadm.go:928] updating node { 192.168.39.231 8443 v1.30.2 crio true true} ...
	I0708 19:29:41.927832   13764 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-268316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-268316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 19:29:41.927901   13764 ssh_runner.go:195] Run: crio config
	I0708 19:29:41.975249   13764 cni.go:84] Creating CNI manager for ""
	I0708 19:29:41.975268   13764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 19:29:41.975279   13764 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 19:29:41.975302   13764 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-268316 NodeName:addons-268316 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 19:29:41.975490   13764 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-268316"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 19:29:41.975564   13764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 19:29:41.985569   13764 binaries.go:44] Found k8s binaries, skipping transfer
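
The kubeadm config printed above is generated per node from the cluster settings. A trimmed sketch of how such a file can be rendered with text/template, showing only a few of the fields and using values taken from this run; it is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A heavily trimmed version of the kubeadm config shown above, rendered per
// node from a handful of parameters.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.Name}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
kubernetesVersion: {{.KubeVersion}}
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"Name":        "addons-268316",
		"NodeIP":      "192.168.39.231",
		"Port":        "8443",
		"KubeVersion": "v1.30.2",
	})
}
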
	I0708 19:29:41.985634   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 19:29:41.995284   13764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0708 19:29:42.011893   13764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 19:29:42.028663   13764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0708 19:29:42.045293   13764 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I0708 19:29:42.049356   13764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:29:42.061843   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:29:42.196088   13764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:29:42.214687   13764 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316 for IP: 192.168.39.231
	I0708 19:29:42.214714   13764 certs.go:194] generating shared ca certs ...
	I0708 19:29:42.214736   13764 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.214897   13764 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 19:29:42.339367   13764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt ...
	I0708 19:29:42.339393   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt: {Name:mka05d1dc67457a4777c0b3766c00234c397468e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.339582   13764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key ...
	I0708 19:29:42.339600   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key: {Name:mk76fee786db566d7f6df1d0853aed58c25bc81b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.339702   13764 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 19:29:42.458532   13764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt ...
	I0708 19:29:42.458559   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt: {Name:mkc8726977bf64262519c5d749001a3b31213a71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.458739   13764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key ...
	I0708 19:29:42.458759   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key: {Name:mk4b7a4888e6f070dec0196575192264fc2860e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
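
Generating the shared minikubeCA and proxyClientCA above amounts to creating self-signed CA key pairs and writing them out as PEM. A minimal standard-library sketch of that step; the subject name and ten-year validity are illustrative choices, not necessarily what minikube uses:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// Generates a self-signed CA certificate and key and writes them as PEM files,
// roughly what the "generating ... ca cert" steps above do.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0), // illustrative validity
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}
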
	I0708 19:29:42.458852   13764 certs.go:256] generating profile certs ...
	I0708 19:29:42.458917   13764 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.key
	I0708 19:29:42.458947   13764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt with IP's: []
	I0708 19:29:42.648669   13764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt ...
	I0708 19:29:42.648699   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: {Name:mk77ac657d40a5d25957426be28dc19433a1fb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.648883   13764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.key ...
	I0708 19:29:42.648897   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.key: {Name:mkcad793ce6a2810562e5b9e54a4148a2a5b1c07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.648996   13764 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.key.e1d3a00c
	I0708 19:29:42.649019   13764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.crt.e1d3a00c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231]
	I0708 19:29:42.793284   13764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.crt.e1d3a00c ...
	I0708 19:29:42.793313   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.crt.e1d3a00c: {Name:mk1de88b64afb0e9940cf1ca3c7888adeb37451a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.793483   13764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.key.e1d3a00c ...
	I0708 19:29:42.793500   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.key.e1d3a00c: {Name:mkd8c322a4a8cbf764b246990122ec9ebfd75ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:42.793592   13764 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.crt.e1d3a00c -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.crt
	I0708 19:29:42.793667   13764 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.key.e1d3a00c -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.key
	I0708 19:29:42.793710   13764 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.key
	I0708 19:29:42.793727   13764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.crt with IP's: []
	I0708 19:29:43.203977   13764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.crt ...
	I0708 19:29:43.204009   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.crt: {Name:mkce7f3d2364a421e69951326bec58c6360dcaf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:43.204172   13764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.key ...
	I0708 19:29:43.204182   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.key: {Name:mk403da6925ba5cfcfe7c85e5000cc2b8ff2127d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:29:43.204333   13764 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 19:29:43.204364   13764 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 19:29:43.204388   13764 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 19:29:43.204410   13764 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 19:29:43.204961   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 19:29:43.237802   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 19:29:43.265345   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 19:29:43.291776   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 19:29:43.317926   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0708 19:29:43.342970   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 19:29:43.368280   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 19:29:43.394357   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 19:29:43.421397   13764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 19:29:43.449031   13764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 19:29:43.467140   13764 ssh_runner.go:195] Run: openssl version
	I0708 19:29:43.473260   13764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 19:29:43.484318   13764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:29:43.489247   13764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:29:43.489308   13764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:29:43.495332   13764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
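
The last two commands place the CA under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash (hence b5213941.0). A sketch of that linking step, shelling out to the same openssl invocation; it would need root to write into /etc/ssl/certs, and the paths are placeholders:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash runs the same `openssl x509 -hash -noout -in` command as
// the log above and symlinks the cert as <hash>.0 in destDir.
func linkBySubjectHash(certPath, destDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return os.Symlink(certPath, filepath.Join(destDir, hash+".0"))
}

func main() {
	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
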
	I0708 19:29:43.506102   13764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 19:29:43.510657   13764 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 19:29:43.510702   13764 kubeadm.go:391] StartCluster: {Name:addons-268316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-268316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:29:43.510783   13764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 19:29:43.510840   13764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 19:29:43.551609   13764 cri.go:89] found id: ""
	I0708 19:29:43.551683   13764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 19:29:43.561883   13764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 19:29:43.571360   13764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 19:29:43.580944   13764 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 19:29:43.580961   13764 kubeadm.go:156] found existing configuration files:
	
	I0708 19:29:43.581002   13764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 19:29:43.589770   13764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 19:29:43.589841   13764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 19:29:43.599390   13764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 19:29:43.608385   13764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 19:29:43.608447   13764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 19:29:43.617730   13764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 19:29:43.626794   13764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 19:29:43.626882   13764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 19:29:43.636379   13764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 19:29:43.645756   13764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 19:29:43.645822   13764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 19:29:43.655441   13764 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 19:29:43.718531   13764 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 19:29:43.718604   13764 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 19:29:43.869615   13764 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 19:29:43.869763   13764 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 19:29:43.869937   13764 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 19:29:44.091062   13764 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 19:29:44.185423   13764 out.go:204]   - Generating certificates and keys ...
	I0708 19:29:44.185565   13764 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 19:29:44.185687   13764 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 19:29:44.201618   13764 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0708 19:29:44.408651   13764 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0708 19:29:44.478821   13764 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0708 19:29:44.672509   13764 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0708 19:29:44.746144   13764 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0708 19:29:44.746515   13764 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-268316 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0708 19:29:44.817129   13764 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0708 19:29:44.817499   13764 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-268316 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0708 19:29:45.096918   13764 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0708 19:29:45.321890   13764 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0708 19:29:45.527132   13764 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0708 19:29:45.527248   13764 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 19:29:45.625388   13764 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 19:29:45.730386   13764 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 19:29:45.868662   13764 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 19:29:46.198138   13764 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 19:29:46.501663   13764 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 19:29:46.502352   13764 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 19:29:46.506763   13764 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 19:29:46.508659   13764 out.go:204]   - Booting up control plane ...
	I0708 19:29:46.508773   13764 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 19:29:46.508873   13764 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 19:29:46.508958   13764 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 19:29:46.524248   13764 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 19:29:46.524753   13764 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 19:29:46.524804   13764 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 19:29:46.665379   13764 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 19:29:46.665491   13764 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 19:29:47.666712   13764 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002053293s
	I0708 19:29:47.666831   13764 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 19:29:52.167692   13764 kubeadm.go:309] [api-check] The API server is healthy after 4.501941351s
	I0708 19:29:52.181266   13764 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 19:29:52.210031   13764 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 19:29:52.234842   13764 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 19:29:52.235049   13764 kubeadm.go:309] [mark-control-plane] Marking the node addons-268316 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 19:29:52.252683   13764 kubeadm.go:309] [bootstrap-token] Using token: j9x0og.fuvsuxwqklap1dd2
	I0708 19:29:52.254033   13764 out.go:204]   - Configuring RBAC rules ...
	I0708 19:29:52.254141   13764 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 19:29:52.259514   13764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 19:29:52.266997   13764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 19:29:52.273776   13764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 19:29:52.277201   13764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 19:29:52.280617   13764 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 19:29:52.571772   13764 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 19:29:53.015880   13764 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 19:29:53.571643   13764 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 19:29:53.572647   13764 kubeadm.go:309] 
	I0708 19:29:53.572717   13764 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 19:29:53.572729   13764 kubeadm.go:309] 
	I0708 19:29:53.572796   13764 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 19:29:53.572807   13764 kubeadm.go:309] 
	I0708 19:29:53.572855   13764 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 19:29:53.572922   13764 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 19:29:53.573002   13764 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 19:29:53.573012   13764 kubeadm.go:309] 
	I0708 19:29:53.573078   13764 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 19:29:53.573088   13764 kubeadm.go:309] 
	I0708 19:29:53.573152   13764 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 19:29:53.573162   13764 kubeadm.go:309] 
	I0708 19:29:53.573263   13764 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 19:29:53.573368   13764 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 19:29:53.573454   13764 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 19:29:53.573471   13764 kubeadm.go:309] 
	I0708 19:29:53.573602   13764 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 19:29:53.573729   13764 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 19:29:53.573742   13764 kubeadm.go:309] 
	I0708 19:29:53.573845   13764 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token j9x0og.fuvsuxwqklap1dd2 \
	I0708 19:29:53.574085   13764 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 19:29:53.574123   13764 kubeadm.go:309] 	--control-plane 
	I0708 19:29:53.574128   13764 kubeadm.go:309] 
	I0708 19:29:53.574255   13764 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 19:29:53.574267   13764 kubeadm.go:309] 
	I0708 19:29:53.574365   13764 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token j9x0og.fuvsuxwqklap1dd2 \
	I0708 19:29:53.574503   13764 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 19:29:53.574710   13764 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 19:29:53.574885   13764 cni.go:84] Creating CNI manager for ""
	I0708 19:29:53.574903   13764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 19:29:53.576607   13764 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 19:29:53.577805   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 19:29:53.588597   13764 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 19:29:53.609331   13764 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 19:29:53.609417   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:53.609450   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-268316 minikube.k8s.io/updated_at=2024_07_08T19_29_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=addons-268316 minikube.k8s.io/primary=true
	I0708 19:29:53.649803   13764 ops.go:34] apiserver oom_adj: -16
	I0708 19:29:53.750857   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:54.251017   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:54.750978   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:55.251894   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:55.751043   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:56.250936   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:56.750970   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:57.250919   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:57.751708   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:58.251803   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:58.751801   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:59.251031   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:29:59.751248   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:00.251466   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:00.751647   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:01.251947   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:01.751244   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:02.251058   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:02.750974   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:03.251612   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:03.751849   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:04.250978   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:04.751172   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:05.251249   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:05.750936   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:06.251927   13764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:30:06.335293   13764 kubeadm.go:1107] duration metric: took 12.725930219s to wait for elevateKubeSystemPrivileges
	W0708 19:30:06.335336   13764 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 19:30:06.335346   13764 kubeadm.go:393] duration metric: took 22.824647888s to StartCluster
	I0708 19:30:06.335367   13764 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:30:06.335534   13764 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:30:06.335874   13764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:30:06.336081   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 19:30:06.336099   13764 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0708 19:30:06.336081   13764 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:30:06.336209   13764 addons.go:69] Setting default-storageclass=true in profile "addons-268316"
	I0708 19:30:06.336237   13764 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-268316"
	I0708 19:30:06.336254   13764 addons.go:69] Setting metrics-server=true in profile "addons-268316"
	I0708 19:30:06.336297   13764 addons.go:234] Setting addon metrics-server=true in "addons-268316"
	I0708 19:30:06.336306   13764 addons.go:69] Setting helm-tiller=true in profile "addons-268316"
	I0708 19:30:06.336309   13764 addons.go:69] Setting ingress-dns=true in profile "addons-268316"
	I0708 19:30:06.336331   13764 addons.go:234] Setting addon ingress-dns=true in "addons-268316"
	I0708 19:30:06.336335   13764 addons.go:234] Setting addon helm-tiller=true in "addons-268316"
	I0708 19:30:06.336345   13764 addons.go:69] Setting storage-provisioner=true in profile "addons-268316"
	I0708 19:30:06.336358   13764 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-268316"
	I0708 19:30:06.336367   13764 addons.go:234] Setting addon storage-provisioner=true in "addons-268316"
	I0708 19:30:06.336369   13764 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-268316"
	I0708 19:30:06.336372   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336376   13764 addons.go:69] Setting volumesnapshots=true in profile "addons-268316"
	I0708 19:30:06.336384   13764 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-268316"
	I0708 19:30:06.336349   13764 addons.go:69] Setting volcano=true in profile "addons-268316"
	I0708 19:30:06.336399   13764 addons.go:234] Setting addon volumesnapshots=true in "addons-268316"
	I0708 19:30:06.336399   13764 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-268316"
	I0708 19:30:06.336408   13764 addons.go:69] Setting registry=true in profile "addons-268316"
	I0708 19:30:06.336428   13764 addons.go:234] Setting addon registry=true in "addons-268316"
	I0708 19:30:06.336434   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336451   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336359   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336400   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336670   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.336704   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.336385   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336759   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.336796   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.336799   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.336286   13764 addons.go:69] Setting ingress=true in profile "addons-268316"
	I0708 19:30:06.336205   13764 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-268316"
	I0708 19:30:06.336816   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.336824   13764 addons.go:234] Setting addon ingress=true in "addons-268316"
	I0708 19:30:06.336829   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.336841   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.336849   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336335   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336932   13764 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-268316"
	I0708 19:30:06.336325   13764 addons.go:69] Setting gcp-auth=true in profile "addons-268316"
	I0708 19:30:06.336970   13764 mustload.go:65] Loading cluster: addons-268316
	I0708 19:30:06.336994   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337003   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337023   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337031   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.336403   13764 addons.go:234] Setting addon volcano=true in "addons-268316"
	I0708 19:30:06.336200   13764 addons.go:69] Setting cloud-spanner=true in profile "addons-268316"
	I0708 19:30:06.336298   13764 config.go:182] Loaded profile config "addons-268316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:30:06.337081   13764 addons.go:234] Setting addon cloud-spanner=true in "addons-268316"
	I0708 19:30:06.337083   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.336194   13764 addons.go:69] Setting yakd=true in profile "addons-268316"
	I0708 19:30:06.337101   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337105   13764 addons.go:234] Setting addon yakd=true in "addons-268316"
	I0708 19:30:06.337216   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.336351   13764 addons.go:69] Setting inspektor-gadget=true in profile "addons-268316"
	I0708 19:30:06.337266   13764 addons.go:234] Setting addon inspektor-gadget=true in "addons-268316"
	I0708 19:30:06.337294   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.337312   13764 config.go:182] Loaded profile config "addons-268316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:30:06.337346   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337370   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337409   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.337626   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.336799   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337669   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337684   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337713   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.337669   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337800   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337809   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337833   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337652   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337229   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.337903   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.337915   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.338033   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.338146   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.338175   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.339580   13764 out.go:177] * Verifying Kubernetes components...
	I0708 19:30:06.341606   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:30:06.357834   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45199
	I0708 19:30:06.357883   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42713
	I0708 19:30:06.357850   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34085
	I0708 19:30:06.358115   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39835
	I0708 19:30:06.358739   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0708 19:30:06.358845   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0708 19:30:06.358927   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.358976   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.359034   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.359057   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.359112   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.359169   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.359469   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.359486   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.359567   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.359578   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.359601   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.359612   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.359686   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.359697   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.359712   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.359722   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.359881   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.359919   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.359930   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.360062   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.360441   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.360469   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.360509   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.360531   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.363638   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.363712   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.363735   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.363751   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.363929   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.363951   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.364448   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.364490   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.364719   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.364749   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.366315   13764 addons.go:234] Setting addon default-storageclass=true in "addons-268316"
	I0708 19:30:06.366358   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.366695   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.366729   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.367023   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.367542   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.372476   13764 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-268316"
	I0708 19:30:06.372518   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.372866   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.372900   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.402888   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0708 19:30:06.403512   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.403599   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
	I0708 19:30:06.403953   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.404286   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.404308   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.404453   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.404464   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.404815   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.404924   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.405478   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.405519   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.405721   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.405798   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I0708 19:30:06.406638   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.408319   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.408739   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44639
	I0708 19:30:06.408999   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.409012   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.409067   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.409370   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.409966   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.409989   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.410199   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
	I0708 19:30:06.410329   13764 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.1
	I0708 19:30:06.410556   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.411058   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.411076   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.411396   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.411595   13764 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0708 19:30:06.411615   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0708 19:30:06.411634   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.411600   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.411689   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.411752   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.412002   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.412522   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.412563   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.413702   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I0708 19:30:06.413703   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:06.414105   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.414136   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.414793   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I0708 19:30:06.414946   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0708 19:30:06.415254   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.415543   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.415907   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.415923   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.416204   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.416219   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.416288   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.416568   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.417125   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.417162   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.417245   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.417410   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.418118   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I0708 19:30:06.418504   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.418872   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.418898   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.419041   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.419181   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.419207   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.419477   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.419535   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.419964   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.419985   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.420058   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.420108   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.420263   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.420417   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.420415   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.420455   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.420417   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.421017   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.421051   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.423035   13764 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0708 19:30:06.424742   13764 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0708 19:30:06.424762   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0708 19:30:06.424784   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.428563   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.428910   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.428933   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.429194   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.429417   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.429606   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.429766   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.431199   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0708 19:30:06.432259   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.433945   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46615
	I0708 19:30:06.434415   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.434524   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38963
	I0708 19:30:06.434882   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.435048   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.435060   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.435517   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.435707   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.436715   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.436733   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.437291   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.437345   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.438316   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.438354   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.438845   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.438878   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.439414   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.439577   13764 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0708 19:30:06.439662   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.441170   13764 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0708 19:30:06.441196   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0708 19:30:06.441217   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.441354   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.443137   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0708 19:30:06.443340   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0708 19:30:06.443827   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.444108   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.444626   13764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0708 19:30:06.444641   13764 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0708 19:30:06.444640   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.444670   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.444682   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.444687   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.444866   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.444986   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.445078   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.445605   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.445623   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.446406   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.447272   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.447298   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.448739   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.449104   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.449126   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.449811   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.450112   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.450296   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.450434   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.453155   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45859
	I0708 19:30:06.453613   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.453707   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0708 19:30:06.453794   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39479
	I0708 19:30:06.453918   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46453
	I0708 19:30:06.454572   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.454593   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.454662   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.455007   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.455019   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.455063   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.455389   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.455479   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.455657   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.455991   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.456001   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.456053   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
	I0708 19:30:06.456195   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39409
	I0708 19:30:06.456309   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.456340   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.456556   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.456646   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.456850   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.456911   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.457106   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.457126   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.457434   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.458961   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.459028   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.459041   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.459056   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.459080   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0708 19:30:06.459185   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.459195   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.459824   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.459993   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.460518   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.460551   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.461166   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.461228   13764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0708 19:30:06.461303   13764 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0708 19:30:06.461463   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.461589   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.461968   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.461983   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.462635   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.462668   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.463347   13764 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 19:30:06.463368   13764 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 19:30:06.463388   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.463446   13764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0708 19:30:06.463547   13764 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 19:30:06.464705   13764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0708 19:30:06.464845   13764 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 19:30:06.464860   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 19:30:06.464876   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.466240   13764 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0708 19:30:06.466260   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0708 19:30:06.466275   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.467713   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.468484   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.468505   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.468725   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.468888   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.469102   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.469117   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I0708 19:30:06.469310   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.469720   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.470413   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.470429   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.470961   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.471236   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.471972   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.472523   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:06.472562   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:06.472978   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.473976   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.474018   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.474676   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.474700   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.474727   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.474746   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.474765   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.474843   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.474994   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.475142   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.475264   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.475471   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.475613   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.475726   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.476506   13764 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0708 19:30:06.477871   13764 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0708 19:30:06.477884   13764 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0708 19:30:06.477897   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.480523   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40729
	I0708 19:30:06.481008   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.481424   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.481571   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.481583   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.481985   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.482325   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.482333   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.482345   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.482984   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.483148   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.483280   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.483383   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I0708 19:30:06.483522   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46077
	I0708 19:30:06.483654   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.483978   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.484213   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.484384   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.484396   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.484451   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.484771   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.484799   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.484857   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.485697   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.485714   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.485952   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.486697   13764 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0708 19:30:06.487557   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.487797   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36651
	I0708 19:30:06.488024   13764 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0708 19:30:06.488042   13764 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0708 19:30:06.488060   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.488263   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.488336   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.489215   13764 out.go:177]   - Using image docker.io/busybox:stable
	I0708 19:30:06.489324   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.489664   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.490093   13764 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0708 19:30:06.490147   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.490715   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.491421   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.492400   13764 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0708 19:30:06.492420   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0708 19:30:06.492437   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.492504   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.493165   13764 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0708 19:30:06.494092   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0708 19:30:06.494196   13764 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0708 19:30:06.494214   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0708 19:30:06.494230   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.494955   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.494989   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.495025   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.495182   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.495403   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.495595   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.495743   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.495888   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.496043   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.496142   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.496422   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.496644   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.496699   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0708 19:30:06.496770   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.497660   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.498238   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.498272   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.498451   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.498595   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.498727   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.498819   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.499236   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0708 19:30:06.500525   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0708 19:30:06.501767   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0708 19:30:06.503162   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	W0708 19:30:06.504006   13764 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34756->192.168.39.231:22: read: connection reset by peer
	I0708 19:30:06.504035   13764 retry.go:31] will retry after 149.850572ms: ssh: handshake failed: read tcp 192.168.39.1:34756->192.168.39.231:22: read: connection reset by peer
	I0708 19:30:06.504680   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0708 19:30:06.505032   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.505480   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.505499   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.505870   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.506034   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0708 19:30:06.506195   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.507914   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.508283   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0708 19:30:06.508349   13764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0708 19:30:06.509365   13764 out.go:177]   - Using image docker.io/registry:2.8.3
	I0708 19:30:06.509438   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0708 19:30:06.509453   13764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0708 19:30:06.509472   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.512138   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38287
	I0708 19:30:06.512290   13764 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0708 19:30:06.512871   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.513322   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.513364   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.513472   13764 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0708 19:30:06.513490   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0708 19:30:06.513508   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.513563   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.513706   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.513832   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.513925   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.517049   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.517418   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.517448   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.517589   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.517785   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.517920   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.518058   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:06.536016   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.536028   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:06.536545   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.536555   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:06.536566   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.536573   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:06.536891   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.536984   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:06.537107   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.537181   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:06.538810   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.538913   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:06.539074   13764 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 19:30:06.539087   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:06.539091   13764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 19:30:06.539107   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:06.539110   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:06.539266   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:06.539277   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:06.539287   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:06.539294   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:06.539466   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:06.539481   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	W0708 19:30:06.539575   13764 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0708 19:30:06.541925   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.542377   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:06.542401   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:06.542580   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:06.542768   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:06.542941   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:06.543076   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	W0708 19:30:06.546037   13764 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34786->192.168.39.231:22: read: connection reset by peer
	I0708 19:30:06.546065   13764 retry.go:31] will retry after 329.605991ms: ssh: handshake failed: read tcp 192.168.39.1:34786->192.168.39.231:22: read: connection reset by peer
	W0708 19:30:06.654795   13764 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34796->192.168.39.231:22: read: connection reset by peer
	I0708 19:30:06.654833   13764 retry.go:31] will retry after 494.30651ms: ssh: handshake failed: read tcp 192.168.39.1:34796->192.168.39.231:22: read: connection reset by peer
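Note: the three handshake failures above (sshutil.go:64) are each retried after a slightly longer delay: 149ms, 329ms, then 494ms. Below is a minimal Go sketch of that retry-with-growing-delay pattern; dialSSH and withRetry are illustrative names, not minikube's actual sshutil/retry code.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// dialSSH stands in for the real SSH handshake; here it simply fails a
// couple of times so the retry loop has something to do.
func dialSSH(attempt int) error {
	if attempt < 3 {
		return fmt.Errorf("ssh: handshake failed: connection reset by peer")
	}
	return nil
}

// withRetry retries fn with a jittered, growing delay, mirroring the
// "will retry after Nms" lines in the log above.
func withRetry(maxAttempts int, fn func(attempt int) error) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(attempt); err == nil {
			return nil
		}
		delay := time.Duration(100+rand.Intn(100)) * time.Millisecond * time.Duration(attempt)
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := withRetry(5, dialSSH); err != nil {
		fmt.Println("giving up:", err)
	}
}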
	I0708 19:30:06.844238   13764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:30:06.844458   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0708 19:30:06.856509   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0708 19:30:06.894179   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0708 19:30:06.894204   13764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0708 19:30:06.932368   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0708 19:30:06.983456   13764 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0708 19:30:06.983481   13764 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0708 19:30:07.067767   13764 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0708 19:30:07.067791   13764 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0708 19:30:07.071690   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0708 19:30:07.076524   13764 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0708 19:30:07.076548   13764 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0708 19:30:07.091083   13764 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 19:30:07.091101   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0708 19:30:07.098130   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 19:30:07.102426   13764 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0708 19:30:07.102445   13764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0708 19:30:07.145261   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0708 19:30:07.153913   13764 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0708 19:30:07.153939   13764 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0708 19:30:07.171243   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0708 19:30:07.171268   13764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0708 19:30:07.186774   13764 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0708 19:30:07.186793   13764 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0708 19:30:07.246384   13764 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0708 19:30:07.246403   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0708 19:30:07.251512   13764 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 19:30:07.251530   13764 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 19:30:07.306689   13764 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0708 19:30:07.306708   13764 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0708 19:30:07.311245   13764 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0708 19:30:07.311262   13764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0708 19:30:07.312372   13764 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0708 19:30:07.312387   13764 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0708 19:30:07.417363   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0708 19:30:07.417390   13764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0708 19:30:07.450888   13764 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0708 19:30:07.450919   13764 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0708 19:30:07.509759   13764 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0708 19:30:07.509783   13764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0708 19:30:07.527049   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0708 19:30:07.550790   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
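Note: every addon above follows the same two-step pattern: each manifest is scp'd into /etc/kubernetes/addons/ on the guest, then a single kubectl apply is issued with one -f flag per file (as in the registry apply just above). A hedged Go sketch of the command-building half of that pattern follows; buildApplyCmd is an illustrative helper, not minikube's ssh_runner.

package main

import (
	"fmt"
	"strings"
)

// buildApplyCmd assembles the kubectl apply invocation seen in the log:
// one -f flag per manifest that was previously copied onto the node.
func buildApplyCmd(kubectlPath, kubeconfig string, manifests []string) string {
	args := []string{"sudo", "KUBECONFIG=" + kubeconfig, kubectlPath, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return strings.Join(args, " ")
}

func main() {
	cmd := buildApplyCmd(
		"/var/lib/minikube/binaries/v1.30.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/registry-rc.yaml",
			"/etc/kubernetes/addons/registry-svc.yaml",
			"/etc/kubernetes/addons/registry-proxy.yaml",
		},
	)
	fmt.Println(cmd)
}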
	I0708 19:30:07.574174   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 19:30:07.587239   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0708 19:30:07.587268   13764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0708 19:30:07.627694   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0708 19:30:07.634258   13764 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 19:30:07.634277   13764 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 19:30:07.659103   13764 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0708 19:30:07.659143   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0708 19:30:07.671612   13764 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0708 19:30:07.671635   13764 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0708 19:30:07.681561   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0708 19:30:07.681588   13764 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0708 19:30:07.752596   13764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0708 19:30:07.752620   13764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0708 19:30:07.789562   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 19:30:07.816130   13764 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0708 19:30:07.816158   13764 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0708 19:30:07.852905   13764 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 19:30:07.852927   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0708 19:30:07.959574   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0708 19:30:08.016334   13764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0708 19:30:08.016358   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0708 19:30:08.116108   13764 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0708 19:30:08.116135   13764 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0708 19:30:08.222279   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 19:30:08.309838   13764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0708 19:30:08.309873   13764 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0708 19:30:08.466384   13764 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0708 19:30:08.466410   13764 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0708 19:30:08.574391   13764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0708 19:30:08.574422   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0708 19:30:08.860952   13764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0708 19:30:08.860974   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0708 19:30:08.877504   13764 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0708 19:30:08.877522   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0708 19:30:09.143815   13764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0708 19:30:09.143859   13764 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0708 19:30:09.171978   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0708 19:30:09.390006   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0708 19:30:09.419915   13764 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.575427836s)
	I0708 19:30:09.419951   13764 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
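Note: the host record injection logged above comes from the sed pipeline started at 19:30:06.844458: it rewrites the coredns ConfigMap so pods in the guest resolve host.minikube.internal to the host's IP. Reconstructed from those sed expressions, the replaced Corefile gains a log directive above the errors line and, just above the forward . /etc/resolv.conf line, a block like this:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }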
	I0708 19:30:09.419955   13764 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.575676531s)
	I0708 19:30:09.420010   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.563444318s)
	I0708 19:30:09.420055   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:09.420081   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:09.420391   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:09.420407   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:09.420417   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:09.420425   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:09.420445   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:09.420895   13764 node_ready.go:35] waiting up to 6m0s for node "addons-268316" to be "Ready" ...
	I0708 19:30:09.421067   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:09.421069   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:09.421087   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:09.425119   13764 node_ready.go:49] node "addons-268316" has status "Ready":"True"
	I0708 19:30:09.425143   13764 node_ready.go:38] duration metric: took 4.234104ms for node "addons-268316" to be "Ready" ...
	I0708 19:30:09.425155   13764 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 19:30:09.442068   13764 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace to be "Ready" ...
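Note: node_ready.go and pod_ready.go above poll the API server until the Ready condition reports True, each with a 6m0s budget. A rough stand-in for that wait loop is sketched below, shelling out to kubectl with a jsonpath query; minikube itself uses client-go rather than a shell-out, so treat this as an illustration only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitNodeReady polls the node's Ready condition, roughly what
// node_ready.go does, but via kubectl + jsonpath instead of client-go.
func waitNodeReady(node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", node, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %v", node, timeout)
}

func main() {
	if err := waitNodeReady("addons-268316", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}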
	I0708 19:30:09.985500   13764 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-268316" context rescaled to 1 replicas
	I0708 19:30:10.789661   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.85725085s)
	I0708 19:30:10.789716   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:10.789729   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:10.790043   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:10.790084   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:10.790108   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:10.790122   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:10.790391   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:10.790450   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:10.790464   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:11.702987   13764 pod_ready.go:102] pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace has status "Ready":"False"
	I0708 19:30:13.483801   13764 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0708 19:30:13.483837   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:13.486850   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:13.487287   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:13.487319   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:13.487522   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:13.487708   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:13.487849   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:13.488014   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:13.898299   13764 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0708 19:30:13.952410   13764 pod_ready.go:102] pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace has status "Ready":"False"
	I0708 19:30:14.159311   13764 addons.go:234] Setting addon gcp-auth=true in "addons-268316"
	I0708 19:30:14.159373   13764 host.go:66] Checking if "addons-268316" exists ...
	I0708 19:30:14.159830   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:14.159876   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:14.175204   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
	I0708 19:30:14.175677   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:14.176125   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:14.176147   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:14.176469   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:14.176930   13764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:30:14.176953   13764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:30:14.192542   13764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
	I0708 19:30:14.192914   13764 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:30:14.193371   13764 main.go:141] libmachine: Using API Version  1
	I0708 19:30:14.193391   13764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:30:14.193695   13764 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:30:14.193912   13764 main.go:141] libmachine: (addons-268316) Calling .GetState
	I0708 19:30:14.195489   13764 main.go:141] libmachine: (addons-268316) Calling .DriverName
	I0708 19:30:14.195710   13764 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0708 19:30:14.195739   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHHostname
	I0708 19:30:14.198470   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:14.198871   13764 main.go:141] libmachine: (addons-268316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:46:2e", ip: ""} in network mk-addons-268316: {Iface:virbr1 ExpiryTime:2024-07-08 20:29:27 +0000 UTC Type:0 Mac:52:54:00:43:46:2e Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:addons-268316 Clientid:01:52:54:00:43:46:2e}
	I0708 19:30:14.198899   13764 main.go:141] libmachine: (addons-268316) DBG | domain addons-268316 has defined IP address 192.168.39.231 and MAC address 52:54:00:43:46:2e in network mk-addons-268316
	I0708 19:30:14.199009   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHPort
	I0708 19:30:14.199163   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHKeyPath
	I0708 19:30:14.199289   13764 main.go:141] libmachine: (addons-268316) Calling .GetSSHUsername
	I0708 19:30:14.199420   13764 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/addons-268316/id_rsa Username:docker}
	I0708 19:30:15.874006   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.802280101s)
	I0708 19:30:15.874061   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874073   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874101   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.775948161s)
	I0708 19:30:15.874126   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874137   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874147   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.728857792s)
	I0708 19:30:15.874190   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874203   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.347123523s)
	I0708 19:30:15.874231   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874243   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874257   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.323424151s)
	I0708 19:30:15.874276   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874208   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874286   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874365   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.246650808s)
	I0708 19:30:15.874382   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874390   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874413   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.874427   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.874438   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874446   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874558   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.300357136s)
	I0708 19:30:15.874609   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874631   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874482   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.084892969s)
	I0708 19:30:15.874610   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.874694   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874711   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.874738   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.874637   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.874664   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.874787   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.874795   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.874808   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.874817   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.874832   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874847   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874863   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.874873   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.652556056s)
	I0708 19:30:15.874880   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.874714   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874821   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874966   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874716   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.875005   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.875014   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874755   13764 addons.go:475] Verifying addon ingress=true in "addons-268316"
	I0708 19:30:15.875170   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.703037869s)
	I0708 19:30:15.875196   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.875204   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.875280   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.875305   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.875312   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.875321   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.875328   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.875330   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.875338   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.875340   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.876386   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.876424   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.876431   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.876439   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.876446   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.876518   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.876537   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.876543   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.876551   13764 addons.go:475] Verifying addon registry=true in "addons-268316"
	I0708 19:30:15.876899   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.876930   13764 out.go:177] * Verifying ingress addon...
	I0708 19:30:15.876962   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.876990   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.876998   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.874760   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.915151961s)
	I0708 19:30:15.877728   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.877739   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.874774   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.877869   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.877879   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.878246   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.878274   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.878281   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.878288   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.878296   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.878325   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.878347   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.878365   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.878371   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.878378   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.878385   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.878467   13764 out.go:177] * Verifying registry addon...
	I0708 19:30:15.879407   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.879439   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.879460   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.879623   13764 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0708 19:30:15.879706   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.879732   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.879750   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.874689   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.874909   13764 main.go:141] libmachine: Making call to close driver server
	W0708 19:30:15.874907   13764 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0708 19:30:15.879803   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.879808   13764 retry.go:31] will retry after 305.52208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
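
The failed apply and the scheduled retry above show a common ordering problem: the csi-hostpath-snapshotclass.yaml object refers to a VolumeSnapshotClass CRD that the API server has not finished registering yet, so the first apply fails with "no matches for kind" and is retried a moment later. A minimal sketch of that retry pattern is below, assuming kubectl and the kubeconfig path from the log are available on the machine running it; applyWithRetry is a made-up helper and the attempt count and backoff values are illustrative, not minikube's own retry.go behaviour.

    // retry_apply.go: re-apply a manifest until its CRDs are registered.
    // Hypothetical sketch; paths, attempts and backoff are illustrative only.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    	"time"
    )

    func applyWithRetry(kubeconfig, manifest string, attempts int, backoff time.Duration) error {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", manifest).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply failed: %v: %s", err, out)
    		// "no matches for kind" means the CRD is not registered yet; wait and retry.
    		if !strings.Contains(string(out), "no matches for kind") {
    			return lastErr
    		}
    		time.Sleep(backoff)
    		backoff *= 2 // back off a little more between attempts
    	}
    	return lastErr
    }

    func main() {
    	err := applyWithRetry("/var/lib/minikube/kubeconfig",
    		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 5, 300*time.Millisecond)
    	if err != nil {
    		log.Fatal(err)
    	}
    }
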
	I0708 19:30:15.876940   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.879857   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.879908   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.880122   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.880136   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.880144   13764 addons.go:475] Verifying addon metrics-server=true in "addons-268316"
	I0708 19:30:15.880699   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.880713   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.880892   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:15.880929   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.880946   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.881749   13764 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-268316 service yakd-dashboard -n yakd-dashboard
	
	I0708 19:30:15.882519   13764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0708 19:30:15.897086   13764 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0708 19:30:15.897105   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:15.937341   13764 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0708 19:30:15.937377   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:15.938383   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.938408   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.938663   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.938680   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.938696   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	W0708 19:30:15.938783   13764 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
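
The warning above is the API server's optimistic-concurrency check: two writers raced to update the local-path StorageClass, so the second update was rejected with "the object has been modified". The standard remedy is to re-read the object and re-apply the change on conflict. A minimal client-go sketch of that pattern follows, assuming the kubeconfig path from the log; this is not the storage-provisioner-rancher addon's actual code.

    // Hypothetical sketch: mark a StorageClass as default, retrying on update conflicts.
    package main

    import (
    	"context"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Re-fetch inside the retry loop so every attempt uses the latest resourceVersion.
    	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    		_, err = client.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
    		return err
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }
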
	I0708 19:30:15.981122   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:15.981155   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:15.981499   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:15.981518   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:15.983659   13764 pod_ready.go:102] pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace has status "Ready":"False"
	I0708 19:30:16.185977   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 19:30:16.397755   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:16.397788   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:16.759030   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.368962914s)
	I0708 19:30:16.759108   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:16.759123   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:16.759050   13764 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.563318348s)
	I0708 19:30:16.759400   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:16.759432   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:16.759421   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:16.759461   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:16.759497   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:16.759848   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:16.759862   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:16.759873   13764 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-268316"
	I0708 19:30:16.759892   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:16.761571   13764 out.go:177] * Verifying csi-hostpath-driver addon...
	I0708 19:30:16.761590   13764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0708 19:30:16.763327   13764 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0708 19:30:16.764141   13764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0708 19:30:16.764855   13764 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0708 19:30:16.764876   13764 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0708 19:30:16.796977   13764 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0708 19:30:16.797006   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:16.883932   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:16.884626   13764 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0708 19:30:16.884643   13764 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0708 19:30:16.890340   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:17.064648   13764 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0708 19:30:17.064668   13764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0708 19:30:17.177275   13764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0708 19:30:17.290265   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:17.387867   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:17.389905   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:17.772328   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:17.884561   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:17.890751   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:18.271932   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:18.385175   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:18.387368   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:18.448889   13764 pod_ready.go:102] pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace has status "Ready":"False"
	I0708 19:30:18.621925   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.435897584s)
	I0708 19:30:18.621983   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:18.622006   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:18.622263   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:18.622305   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:18.622312   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:18.622326   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:18.622334   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:18.622538   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:18.622562   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:18.772197   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:18.894538   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:18.894609   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:19.145512   13764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.968197321s)
	I0708 19:30:19.145556   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:19.145567   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:19.145867   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:19.145911   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:19.145921   13764 main.go:141] libmachine: (addons-268316) DBG | Closing plugin on server side
	I0708 19:30:19.145927   13764 main.go:141] libmachine: Making call to close driver server
	I0708 19:30:19.145949   13764 main.go:141] libmachine: (addons-268316) Calling .Close
	I0708 19:30:19.146184   13764 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:30:19.146239   13764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:30:19.148004   13764 addons.go:475] Verifying addon gcp-auth=true in "addons-268316"
	I0708 19:30:19.149950   13764 out.go:177] * Verifying gcp-auth addon...
	I0708 19:30:19.152398   13764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0708 19:30:19.182156   13764 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0708 19:30:19.182183   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:19.274665   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:19.385436   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:19.391157   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:19.655876   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:19.771898   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:19.885381   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:19.888263   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:19.952566   13764 pod_ready.go:97] pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.231 HostIPs:[{IP:192.168.39.231}] PodIP: PodIPs:[] StartTime:2024-07-08 19:30:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-08 19:30:11 +0000 UTC,FinishedAt:2024-07-08 19:30:17 +0000 UTC,ContainerID:cri-o://0b33c0f3815deb48a10cce59e4433578640eb5f7f7f542bdfe746620d3c992ae,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://0b33c0f3815deb48a10cce59e4433578640eb5f7f7f542bdfe746620d3c992ae Started:0xc0025429a0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0708 19:30:19.952598   13764 pod_ready.go:81] duration metric: took 10.510498559s for pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace to be "Ready" ...
	E0708 19:30:19.952612   13764 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-29pvb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-08 19:30:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.231 HostIPs:[{IP:192.168.39.231}] PodIP: PodIPs:[] StartTime:2024-07-08 19:30:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-08 19:30:11 +0000 UTC,FinishedAt:2024-07-08 19:30:17 +0000 UTC,ContainerID:cri-o://0b33c0f3815deb48a10cce59e4433578640eb5f7f7f542bdfe746620d3c992ae,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://0b33c0f3815deb48a10cce59e4433578640eb5f7f7f542bdfe746620d3c992ae Started:0xc0025429a0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0708 19:30:19.952621   13764 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mdmnx" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.964207   13764 pod_ready.go:92] pod "coredns-7db6d8ff4d-mdmnx" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:19.964229   13764 pod_ready.go:81] duration metric: took 11.599292ms for pod "coredns-7db6d8ff4d-mdmnx" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.964243   13764 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.974307   13764 pod_ready.go:92] pod "etcd-addons-268316" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:19.974335   13764 pod_ready.go:81] duration metric: took 10.083616ms for pod "etcd-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.974350   13764 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.981161   13764 pod_ready.go:92] pod "kube-apiserver-addons-268316" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:19.981179   13764 pod_ready.go:81] duration metric: took 6.820418ms for pod "kube-apiserver-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.981190   13764 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.987268   13764 pod_ready.go:92] pod "kube-controller-manager-addons-268316" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:19.987285   13764 pod_ready.go:81] duration metric: took 6.087748ms for pod "kube-controller-manager-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:19.987296   13764 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7plgc" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:20.158147   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:20.270318   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:20.347381   13764 pod_ready.go:92] pod "kube-proxy-7plgc" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:20.347415   13764 pod_ready.go:81] duration metric: took 360.111234ms for pod "kube-proxy-7plgc" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:20.347430   13764 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:20.385071   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:20.392739   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:20.657344   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:20.745992   13764 pod_ready.go:92] pod "kube-scheduler-addons-268316" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:20.746024   13764 pod_ready.go:81] duration metric: took 398.58436ms for pod "kube-scheduler-addons-268316" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:20.746037   13764 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-s4n9d" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:20.772660   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:20.884153   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:20.886940   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:21.157466   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:21.269235   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:21.384480   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:21.395529   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:21.656534   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:21.769504   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:21.883565   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:21.886330   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:22.156134   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:22.270016   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:22.384218   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:22.387270   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:22.656642   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:22.753366   13764 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-s4n9d" in "kube-system" namespace has status "Ready":"False"
	I0708 19:30:22.769618   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:22.886014   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:22.887815   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:23.158198   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:23.269971   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:23.385720   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:23.387839   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:23.744086   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:23.770775   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:23.884436   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:23.888569   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:24.155773   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:24.270158   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:24.384960   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:24.388616   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:24.655910   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:24.769803   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:24.884135   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:24.888392   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:25.157144   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:25.253339   13764 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-s4n9d" in "kube-system" namespace has status "Ready":"False"
	I0708 19:30:25.273789   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:25.385779   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:25.387229   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:25.656585   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:25.762163   13764 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-s4n9d" in "kube-system" namespace has status "Ready":"True"
	I0708 19:30:25.762184   13764 pod_ready.go:81] duration metric: took 5.016139352s for pod "nvidia-device-plugin-daemonset-s4n9d" in "kube-system" namespace to be "Ready" ...
	I0708 19:30:25.762191   13764 pod_ready.go:38] duration metric: took 16.337024941s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
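
The pod_ready.go entries above poll each system pod until its Ready condition is True, and give up early when the pod has already finished (the "Succeeded (skipping!)" branch for the replaced coredns pod). A condensed sketch of that wait is below; it assumes a clientset built as in the earlier StorageClass sketch, WaitForPodReady is a made-up name, and the 2s poll interval is illustrative while the timeout mirrors the "waiting up to 6m0s" lines.

    // Hypothetical sketch: poll a pod until its Ready condition is True, up to a timeout.
    package podready

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    func WaitForPodReady(ctx context.Context, client kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			p, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling through transient API errors
    			}
    			if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
    				// Mirrors the "skipping!" branch: a finished pod will never become Ready.
    				return false, fmt.Errorf("pod %s/%s finished with phase %s", ns, name, p.Status.Phase)
    			}
    			for _, c := range p.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

Called with the "kube-system" namespace and a 6m0s timeout, this reproduces the pod_ready.go wait in miniature.
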
	I0708 19:30:25.762204   13764 api_server.go:52] waiting for apiserver process to appear ...
	I0708 19:30:25.762264   13764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 19:30:25.772495   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:25.781026   13764 api_server.go:72] duration metric: took 19.444844627s to wait for apiserver process to appear ...
	I0708 19:30:25.781051   13764 api_server.go:88] waiting for apiserver healthz status ...
	I0708 19:30:25.781071   13764 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I0708 19:30:25.785956   13764 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I0708 19:30:25.786820   13764 api_server.go:141] control plane version: v1.30.2
	I0708 19:30:25.786850   13764 api_server.go:131] duration metric: took 5.79073ms to wait for apiserver health ...
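
The api_server.go lines above probe the API server's /healthz endpoint until it answers 200 with "ok". A small sketch of that probe using client-go's REST client (so it reuses the kubeconfig's TLS and auth material rather than raw net/http) follows; apiServerHealthy is a made-up helper name.

    // Hypothetical sketch: ask the API server's /healthz endpoint whether it is healthy.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func apiServerHealthy(client *kubernetes.Clientset) error {
    	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
    	if err != nil {
    		return err
    	}
    	if string(body) != "ok" {
    		return fmt.Errorf("healthz returned %q", body)
    	}
    	return nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := apiServerHealthy(client); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("apiserver /healthz: ok")
    }
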
	I0708 19:30:25.786868   13764 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 19:30:25.796582   13764 system_pods.go:59] 18 kube-system pods found
	I0708 19:30:25.796605   13764 system_pods.go:61] "coredns-7db6d8ff4d-mdmnx" [e8790295-025f-492c-8527-b45580989758] Running
	I0708 19:30:25.796612   13764 system_pods.go:61] "csi-hostpath-attacher-0" [f1542e8d-b696-41e6-8d98-c47563e0d4f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0708 19:30:25.796618   13764 system_pods.go:61] "csi-hostpath-resizer-0" [dce4942c-24f6-4da5-b501-5b8577368aa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0708 19:30:25.796626   13764 system_pods.go:61] "csi-hostpathplugin-wsvcv" [26bd046a-4a16-4a94-aa7e-09f3b7b7c6c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0708 19:30:25.796631   13764 system_pods.go:61] "etcd-addons-268316" [88c9169d-a21e-4479-9e15-38a9161b26ef] Running
	I0708 19:30:25.796635   13764 system_pods.go:61] "kube-apiserver-addons-268316" [be0113de-6c81-41f3-bd33-98d5f4c07b95] Running
	I0708 19:30:25.796639   13764 system_pods.go:61] "kube-controller-manager-addons-268316" [bcc97d95-de10-4126-86cd-0e60ca3ce913] Running
	I0708 19:30:25.796644   13764 system_pods.go:61] "kube-ingress-dns-minikube" [f5f48486-6578-4b7c-ab34-56de96be0694] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0708 19:30:25.796651   13764 system_pods.go:61] "kube-proxy-7plgc" [4dcd9909-5fdf-4a54-a66c-12498b65c28f] Running
	I0708 19:30:25.796655   13764 system_pods.go:61] "kube-scheduler-addons-268316" [12fedcd0-6554-4acf-9293-619280507622] Running
	I0708 19:30:25.796660   13764 system_pods.go:61] "metrics-server-c59844bb4-c6gzl" [fa5607f8-de0f-4bb1-b219-54ef33238b21] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 19:30:25.796666   13764 system_pods.go:61] "nvidia-device-plugin-daemonset-s4n9d" [bd2137b3-9f97-4991-91e6-20ab23e68c75] Running
	I0708 19:30:25.796672   13764 system_pods.go:61] "registry-g8hs8" [36f4018c-5097-47ad-b3e0-a8a225032ab3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0708 19:30:25.796676   13764 system_pods.go:61] "registry-proxy-rrxb2" [ebfad772-c807-408a-81ef-0f5d1ad1b929] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0708 19:30:25.796691   13764 system_pods.go:61] "snapshot-controller-745499f584-s2fn5" [e3414fea-8eee-4787-b9c1-70ada7ae04cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0708 19:30:25.796701   13764 system_pods.go:61] "snapshot-controller-745499f584-skqf6" [7af3eb18-da85-4dce-bbed-84f62a78d232] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0708 19:30:25.796705   13764 system_pods.go:61] "storage-provisioner" [3a22fea0-2e74-4b1d-8943-4009c3bae190] Running
	I0708 19:30:25.796710   13764 system_pods.go:61] "tiller-deploy-6677d64bcd-lmtgw" [785aba76-863a-4bd2-a24f-c7eaa42f49b4] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0708 19:30:25.796716   13764 system_pods.go:74] duration metric: took 9.842333ms to wait for pod list to return data ...
	I0708 19:30:25.796726   13764 default_sa.go:34] waiting for default service account to be created ...
	I0708 19:30:25.798545   13764 default_sa.go:45] found service account: "default"
	I0708 19:30:25.798562   13764 default_sa.go:55] duration metric: took 1.83091ms for default service account to be created ...
	I0708 19:30:25.798569   13764 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 19:30:25.808104   13764 system_pods.go:86] 18 kube-system pods found
	I0708 19:30:25.808126   13764 system_pods.go:89] "coredns-7db6d8ff4d-mdmnx" [e8790295-025f-492c-8527-b45580989758] Running
	I0708 19:30:25.808133   13764 system_pods.go:89] "csi-hostpath-attacher-0" [f1542e8d-b696-41e6-8d98-c47563e0d4f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0708 19:30:25.808139   13764 system_pods.go:89] "csi-hostpath-resizer-0" [dce4942c-24f6-4da5-b501-5b8577368aa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0708 19:30:25.808146   13764 system_pods.go:89] "csi-hostpathplugin-wsvcv" [26bd046a-4a16-4a94-aa7e-09f3b7b7c6c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0708 19:30:25.808152   13764 system_pods.go:89] "etcd-addons-268316" [88c9169d-a21e-4479-9e15-38a9161b26ef] Running
	I0708 19:30:25.808157   13764 system_pods.go:89] "kube-apiserver-addons-268316" [be0113de-6c81-41f3-bd33-98d5f4c07b95] Running
	I0708 19:30:25.808164   13764 system_pods.go:89] "kube-controller-manager-addons-268316" [bcc97d95-de10-4126-86cd-0e60ca3ce913] Running
	I0708 19:30:25.808176   13764 system_pods.go:89] "kube-ingress-dns-minikube" [f5f48486-6578-4b7c-ab34-56de96be0694] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0708 19:30:25.808187   13764 system_pods.go:89] "kube-proxy-7plgc" [4dcd9909-5fdf-4a54-a66c-12498b65c28f] Running
	I0708 19:30:25.808194   13764 system_pods.go:89] "kube-scheduler-addons-268316" [12fedcd0-6554-4acf-9293-619280507622] Running
	I0708 19:30:25.808203   13764 system_pods.go:89] "metrics-server-c59844bb4-c6gzl" [fa5607f8-de0f-4bb1-b219-54ef33238b21] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 19:30:25.808210   13764 system_pods.go:89] "nvidia-device-plugin-daemonset-s4n9d" [bd2137b3-9f97-4991-91e6-20ab23e68c75] Running
	I0708 19:30:25.808216   13764 system_pods.go:89] "registry-g8hs8" [36f4018c-5097-47ad-b3e0-a8a225032ab3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0708 19:30:25.808223   13764 system_pods.go:89] "registry-proxy-rrxb2" [ebfad772-c807-408a-81ef-0f5d1ad1b929] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0708 19:30:25.808232   13764 system_pods.go:89] "snapshot-controller-745499f584-s2fn5" [e3414fea-8eee-4787-b9c1-70ada7ae04cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0708 19:30:25.808239   13764 system_pods.go:89] "snapshot-controller-745499f584-skqf6" [7af3eb18-da85-4dce-bbed-84f62a78d232] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0708 19:30:25.808247   13764 system_pods.go:89] "storage-provisioner" [3a22fea0-2e74-4b1d-8943-4009c3bae190] Running
	I0708 19:30:25.808257   13764 system_pods.go:89] "tiller-deploy-6677d64bcd-lmtgw" [785aba76-863a-4bd2-a24f-c7eaa42f49b4] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0708 19:30:25.808265   13764 system_pods.go:126] duration metric: took 9.691784ms to wait for k8s-apps to be running ...
	I0708 19:30:25.808273   13764 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 19:30:25.808312   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 19:30:25.823384   13764 system_svc.go:56] duration metric: took 15.100465ms WaitForService to wait for kubelet
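
system_svc.go verifies the kubelet unit by running "sudo systemctl is-active --quiet service kubelet" over SSH inside the VM. Run locally, without the SSH transport, the same check looks roughly like the sketch below; unitActive is a made-up helper, and the exit-code handling is the important part since --quiet prints nothing.

    // Hypothetical sketch: check whether a systemd unit (e.g. kubelet) is active.
    package main

    import (
    	"errors"
    	"fmt"
    	"log"
    	"os/exec"
    )

    func unitActive(unit string) (bool, error) {
    	err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
    	if err == nil {
    		return true, nil // exit 0: unit is active
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		return false, nil // non-zero exit: inactive, failed, or unknown
    	}
    	return false, err // systemctl itself could not be run
    }

    func main() {
    	active, err := unitActive("kubelet")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("kubelet active:", active)
    }
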
	I0708 19:30:25.823420   13764 kubeadm.go:576] duration metric: took 19.487243061s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 19:30:25.823445   13764 node_conditions.go:102] verifying NodePressure condition ...
	I0708 19:30:25.885912   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:25.890599   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:25.947614   13764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 19:30:25.947639   13764 node_conditions.go:123] node cpu capacity is 2
	I0708 19:30:25.947650   13764 node_conditions.go:105] duration metric: took 124.188481ms to run NodePressure ...
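
The node_conditions.go lines read each node's ephemeral-storage and CPU capacity and confirm that no pressure conditions are set before startup continues. A compact client-go sketch of that NodePressure check follows, again assuming a clientset built as in the earlier sketches; CheckNodePressure is a made-up name.

    // Hypothetical sketch: report node capacity and fail on Memory/Disk/PID pressure.
    package nodecheck

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func CheckNodePressure(ctx context.Context, client kubernetes.Interface) error {
    	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    		for _, c := range n.Status.Conditions {
    			switch c.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				if c.Status == corev1.ConditionTrue {
    					return fmt.Errorf("node %s reports %s", n.Name, c.Type)
    				}
    			}
    		}
    	}
    	return nil
    }
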
	I0708 19:30:25.947661   13764 start.go:240] waiting for startup goroutines ...
	I0708 19:30:26.156149   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:26.275957   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:26.384290   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:26.389704   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:26.788097   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:26.788457   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:26.884188   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:26.886856   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:27.156131   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:27.271089   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:27.384315   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:27.387703   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:27.656587   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:27.769742   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:27.883721   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:27.887193   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:28.156087   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:28.272646   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:28.383926   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:28.386631   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:28.656100   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:28.773416   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:28.884461   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:28.887854   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:29.156817   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:29.270866   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:29.384528   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:29.388628   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:29.656020   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:29.773644   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:29.884116   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:29.888046   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:30.156031   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:30.270303   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:30.385896   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:30.388477   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:30.657019   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:30.770326   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:30.883645   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:30.887050   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:31.156203   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:31.270773   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:31.383564   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:31.386473   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:31.658903   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:31.776015   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:31.885161   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:31.888341   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:32.156210   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:32.272653   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:32.391893   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:32.399702   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:32.657035   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:32.770419   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:32.885594   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:32.891756   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:33.157655   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:33.273833   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:33.384300   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:33.386872   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:33.656935   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:33.772144   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:33.884376   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:33.887225   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:34.156191   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:34.270151   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:34.384777   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:34.387413   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:34.656689   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:34.769915   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:34.884056   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:34.887639   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:35.157515   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:35.269526   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:35.386142   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:35.400084   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:35.656139   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:35.769959   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:35.884185   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:35.886691   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:36.156190   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:36.269587   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:36.384164   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:36.386784   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:36.656585   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:36.769328   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:36.883655   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:36.885923   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:37.156046   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:37.270042   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:37.384565   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:37.386760   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:37.657700   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:37.769503   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:37.885155   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:37.887730   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:38.156733   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:38.272188   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:38.384709   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:38.389139   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:38.656712   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:38.769636   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:38.883905   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:38.889926   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:39.157006   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:39.270402   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:39.384348   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:39.386702   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:39.658286   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:39.771604   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:39.884257   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:39.887645   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:40.155982   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:40.270073   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:40.384015   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:40.386886   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:40.655786   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:40.770291   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:40.884393   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:40.886894   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:41.157757   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:41.270196   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:41.384873   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:41.388312   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:41.656403   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:41.770726   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:42.298393   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:42.308638   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:42.308965   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:42.309046   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:42.384715   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:42.388715   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:42.658237   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:42.771113   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:42.885055   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:42.887520   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:43.157401   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:43.279630   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:43.390052   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:43.391668   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:43.656093   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:43.769921   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:43.884414   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:43.887300   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:44.156314   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:44.270102   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:44.384235   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:44.387800   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:44.657212   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:44.769895   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:44.884270   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:44.887256   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:45.157123   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:45.270710   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:45.385191   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:45.387645   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:45.658270   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:45.770313   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:45.884394   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:45.887548   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:46.156628   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:46.269687   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:46.385075   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:46.390999   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:46.764732   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:46.772397   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:46.884187   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:46.889679   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:47.156227   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:47.270432   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:47.384627   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:47.388291   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:47.656352   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:47.769330   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:47.884000   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:47.895906   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:48.157217   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:48.270934   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:48.384543   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:48.388060   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 19:30:48.656536   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:48.769979   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:48.884258   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:48.887201   13764 kapi.go:107] duration metric: took 33.004679515s to wait for kubernetes.io/minikube-addons=registry ...
	I0708 19:30:49.156325   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:49.271129   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:49.384708   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:49.656661   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:49.769494   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:49.883797   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:50.156533   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:50.271552   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:50.807904   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:50.812109   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:50.812558   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:50.884405   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:51.157128   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:51.270360   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:51.384468   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:51.656325   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:51.773422   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:51.884854   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:52.158700   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:52.271198   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:52.384958   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:52.656690   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:52.770983   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:52.883776   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:53.155793   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:53.269881   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:53.384453   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:53.656589   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:53.769677   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:53.883783   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:54.156456   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:54.270819   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:54.384556   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:54.656509   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:54.770449   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:54.884691   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:55.157309   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:55.270294   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:55.384024   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:55.752030   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:55.781951   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:55.884359   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:56.156465   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:56.269498   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:56.384205   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:56.656304   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:56.772072   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:57.157085   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:57.157418   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:57.270609   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:57.384803   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:57.656776   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:57.771891   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:57.884421   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:58.155820   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:58.269802   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:58.384744   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:58.656535   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:58.769390   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:58.885632   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:59.158183   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:59.271283   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:59.386444   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:30:59.658882   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:30:59.771120   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:30:59.884118   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:00.156707   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:00.269925   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:00.385287   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:00.656683   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:00.770058   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:00.886731   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:01.156656   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:01.269723   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:01.384905   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:01.655845   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:01.769891   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:01.883172   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:02.156276   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:02.270399   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:02.384817   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:02.726018   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:02.848798   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:02.885509   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:03.161258   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:03.270733   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:03.386929   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:03.658195   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:03.769719   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:03.889449   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:04.155788   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:04.269871   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:04.384425   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:04.656882   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:04.769809   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:04.889445   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:05.156210   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:05.273112   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:05.384168   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:05.656366   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:05.771626   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:05.884488   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:06.156983   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:06.272548   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:06.385551   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:06.656047   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:06.782823   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:06.889006   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:07.155882   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:07.271092   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:07.392574   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:07.657466   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:07.775709   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:07.887627   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:08.159404   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:08.269466   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:08.385824   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:08.656899   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:08.770024   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:08.885480   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:09.157120   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:09.270472   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:09.384852   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:09.662447   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:09.770592   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:09.884990   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:10.156501   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:10.269830   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:10.383964   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:10.656249   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:10.771880   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:10.884668   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:11.157177   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:11.270951   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:11.384360   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:11.657510   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:11.769757   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:11.884547   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:12.156420   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:12.270183   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:12.384981   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:12.657499   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:12.769955   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:12.883559   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:13.155982   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:13.270067   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:13.384794   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:13.656915   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:13.770552   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:13.885551   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:14.157489   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:14.269755   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:14.388648   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:15.000574   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:15.000805   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:15.001595   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:15.157028   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:15.270755   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:15.385812   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:15.656274   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:15.770649   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 19:31:15.884626   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:16.163615   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:16.269642   13764 kapi.go:107] duration metric: took 59.50549752s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0708 19:31:16.384728   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:16.656842   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:16.884601   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:17.156870   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:17.384551   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:17.656680   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:17.884106   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:18.157778   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:18.384108   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:18.656556   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:18.886096   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:19.160835   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:19.384579   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:19.657234   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:19.884491   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:20.156697   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:20.385301   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:20.656426   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:20.885723   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:21.157531   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:21.386726   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:21.656337   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:21.884912   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:22.155507   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:22.384938   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:22.655690   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:22.884207   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:23.156641   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:23.384950   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:23.656840   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:23.884069   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:24.157051   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:24.384731   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:24.656987   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:24.885400   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:25.400809   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:25.404808   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:25.655792   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:25.884064   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:26.156702   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:26.385350   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:26.656501   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:26.885112   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:27.158985   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:27.392981   13764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 19:31:27.670044   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:27.886066   13764 kapi.go:107] duration metric: took 1m12.006440672s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0708 19:31:28.157332   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:28.695753   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:29.158456   13764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 19:31:29.660563   13764 kapi.go:107] duration metric: took 1m10.508162614s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0708 19:31:29.662136   13764 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-268316 cluster.
	I0708 19:31:29.663631   13764 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0708 19:31:29.665306   13764 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0708 19:31:29.666724   13764 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, helm-tiller, storage-provisioner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0708 19:31:29.667996   13764 addons.go:510] duration metric: took 1m23.331896601s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner helm-tiller storage-provisioner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0708 19:31:29.668030   13764 start.go:245] waiting for cluster config update ...
	I0708 19:31:29.668047   13764 start.go:254] writing updated cluster config ...
	I0708 19:31:29.668279   13764 ssh_runner.go:195] Run: rm -f paused
	I0708 19:31:29.721898   13764 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 19:31:29.723824   13764 out.go:177] * Done! kubectl is now configured to use "addons-268316" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.334378454Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720467448334346271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587821,},InodesUsed:&UInt64Value{Value:204,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d91fb9d6-ff67-41b1-8562-517431129109 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.335045663Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=266c8d02-3de9-4485-911f-14868357f209 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.335138836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=266c8d02-3de9-4485-911f-14868357f209 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.335436014Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f04837ef2d2a591237579fa868e9a2aef2dd5b55f6ca0f9e4216d0f9a5a77cb,PodSandboxId:8b3b8135d419631ff3173aa315156556e130137c4f9028add8ea5b0254fe418a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720467247927762236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-lznqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db22bb68-894a-454b-a1d2-9410d39a9528,},Annotations:map[string]string{io.kubernetes.container.hash: 3b742db2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ac69521142a6e35cd53b6146de1f860720de3a3b9d912255bd3b66a9ef1aa9,PodSandboxId:81bb11f417f17a79ce947d7ce9f7acc952bd3a5e0a0ee55786cd608bca00bdc0,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720467109169255473,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5771cdad-38eb-4b69-9d82-5a58ef2c2f4e,},Annotations:map[string]string{io.kubern
etes.container.hash: 92c93ea5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa89db705add5299b6512650662be261a1b54171a36defda0febaa4d76b7719,PodSandboxId:6221ab3e632e79d5d9bc777c45be85aa4398f095df5d16085e097688153d9fc6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720467097497177949,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-cgkpr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61b3fef5-b549-4aab-a5f7-da35eb3d4477,},Annotations:map[string]string{io.kubernetes.container.hash: fd1fb148,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15069a7b3f50f8b733f6b841313e7a8a53493fde2473f0d6937d3d42cdb19b58,PodSandboxId:a475af66e07627f5d7be099005a460014744a7e5e962deff973069a4ddf3ee6b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720467088955878112,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-gtf45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 60da309c-ad4b-4388-aa45-131c4fb0f4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 12f75852,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db2932e06f7be72942ef239e31d1031ce07694c0eb50c48426a91525fc5997b,PodSandboxId:7b106f06e44b15bc52775874e37735172477625277d973d9f8e510aa5a0f5007,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17204
67058658406337,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rf6p2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3ac6741a-bec9-4f29-a6eb-c73c7500970b,},Annotations:map[string]string{io.kubernetes.container.hash: a6c15013,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15a419524494fe4cac639c22abd343bc586a2b8dacee4ba44e05b64a982534b,PodSandboxId:494772db18f3ff4a6eed10b94a087e898e932f0db0dd5abca014a0e933a95851,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1720467052855153288,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-nqm94,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 894303f6-0b3f-451e-8b4c-a1269b70c68f,},Annotations:map[string]string{io.kubernetes.container.hash: b37e3ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d,PodSandboxId:68cc01146add074afc7474a39a65cf3f67d5159accedf923d725dfb2b979aa44,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720467042432339060,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c6gzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5607f8-de0f-4bb1-b219-54ef33238b21,},Annotations:map[string]string{io.kubernetes.container.hash: 5e9677a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0486a262195e25e0cbcb85c7f856a35300a55c800deabb7b3cea1c342fb270,PodSandboxId:cab7dfbbaf216814b4579d7313dd71505a5e81c4b09c6bf1abec9adf853bd02d,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720467014260979278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a22fea0-2e74-4b1d-8943-4009c3bae190,},Annotations:map[string]string{io.kubernetes.container.hash: 2544e9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa46f641fce3a59d88fe88837a4c8c08f7e4447206bc8e44e12b9f4f5079abef,PodSandboxId:33f84dfe9bc8103d8a4d8447c3cb88183ca9f280e28de2f92203373b2195c63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Ima
ge:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720467010485771198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdmnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8790295-025f-492c-8527-b45580989758,},Annotations:map[string]string{io.kubernetes.container.hash: 933e8636,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49fc
1829105fd93b0c9eef5eaf11f30232d42efabb4cb4130c54a76a96ddbd82,PodSandboxId:36132fbfe93b031bbb4a7915d682454b32a82ec148af0e191ae9410b8818414d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720467007796499322,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7plgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dcd9909-5fdf-4a54-a66c-12498b65c28f,},Annotations:map[string]string{io.kubernetes.container.hash: 683e9680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d49be99483f5c15756481dec1f198cbd8e9d
a87539ae5759ec447421c2bf138,PodSandboxId:717561d56daf2914143b08bb1f10bf41c455065ce54ea0b073b734843dc7684e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720466987871363903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c73b77c9e8c067af0478499956a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a92a99f73b4cef445e51d38c9c94905a53d179bb9954413a5a15d3
c7b803b46,PodSandboxId:329650aaf1bc3112b2f246746ca0fdbb0bcf8fde6ea8df7451a3998bfc1a8642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720466987910144817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a0cfbd4519e6880ca99be18bf725eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35d37ebf78b3809e33fc570ccdc8fa7d7a0fd4dcb658545c70675d77960f080,PodSand
boxId:ffe430fd6cb316055ec66677e7e183a3803757ae260eb7eb9ebc754295c738be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720466987873699629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf9d34116c191cb68773ad425a33b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9332ffa119798ff8821e289f7966df0d8310e8c1a67d1304c5ba54479752c9
01,PodSandboxId:3b2f473211f40a9fd72f56007b83165481808f94dd45efcb518930f575189497,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720466987808420964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4feb2225d826d58f607b166f558fd389,},Annotations:map[string]string{io.kubernetes.container.hash: 73820d47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=266c8d02-3de9-4485-911f-14868357f209 name=/runtime.v1.RuntimeService/ListC
ontainers
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.373497721Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c851bcad-7ef8-4e06-bf74-ec3b3ba7447c name=/runtime.v1.RuntimeService/Version
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.373570862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c851bcad-7ef8-4e06-bf74-ec3b3ba7447c name=/runtime.v1.RuntimeService/Version
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.374666716Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13cb0dc7-b9d1-4642-92e7-fb2052a76704 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.376102659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720467448375980406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587821,},InodesUsed:&UInt64Value{Value:204,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13cb0dc7-b9d1-4642-92e7-fb2052a76704 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.376620034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e24d5cf9-1b59-4c4f-ac21-7959c87c0ace name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.376680347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e24d5cf9-1b59-4c4f-ac21-7959c87c0ace name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.376960949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f04837ef2d2a591237579fa868e9a2aef2dd5b55f6ca0f9e4216d0f9a5a77cb,PodSandboxId:8b3b8135d419631ff3173aa315156556e130137c4f9028add8ea5b0254fe418a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720467247927762236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-lznqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db22bb68-894a-454b-a1d2-9410d39a9528,},Annotations:map[string]string{io.kubernetes.container.hash: 3b742db2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ac69521142a6e35cd53b6146de1f860720de3a3b9d912255bd3b66a9ef1aa9,PodSandboxId:81bb11f417f17a79ce947d7ce9f7acc952bd3a5e0a0ee55786cd608bca00bdc0,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720467109169255473,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5771cdad-38eb-4b69-9d82-5a58ef2c2f4e,},Annotations:map[string]string{io.kubern
etes.container.hash: 92c93ea5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa89db705add5299b6512650662be261a1b54171a36defda0febaa4d76b7719,PodSandboxId:6221ab3e632e79d5d9bc777c45be85aa4398f095df5d16085e097688153d9fc6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720467097497177949,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-cgkpr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61b3fef5-b549-4aab-a5f7-da35eb3d4477,},Annotations:map[string]string{io.kubernetes.container.hash: fd1fb148,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15069a7b3f50f8b733f6b841313e7a8a53493fde2473f0d6937d3d42cdb19b58,PodSandboxId:a475af66e07627f5d7be099005a460014744a7e5e962deff973069a4ddf3ee6b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720467088955878112,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-gtf45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 60da309c-ad4b-4388-aa45-131c4fb0f4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 12f75852,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db2932e06f7be72942ef239e31d1031ce07694c0eb50c48426a91525fc5997b,PodSandboxId:7b106f06e44b15bc52775874e37735172477625277d973d9f8e510aa5a0f5007,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17204
67058658406337,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rf6p2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3ac6741a-bec9-4f29-a6eb-c73c7500970b,},Annotations:map[string]string{io.kubernetes.container.hash: a6c15013,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15a419524494fe4cac639c22abd343bc586a2b8dacee4ba44e05b64a982534b,PodSandboxId:494772db18f3ff4a6eed10b94a087e898e932f0db0dd5abca014a0e933a95851,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1720467052855153288,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-nqm94,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 894303f6-0b3f-451e-8b4c-a1269b70c68f,},Annotations:map[string]string{io.kubernetes.container.hash: b37e3ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d,PodSandboxId:68cc01146add074afc7474a39a65cf3f67d5159accedf923d725dfb2b979aa44,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720467042432339060,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c6gzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5607f8-de0f-4bb1-b219-54ef33238b21,},Annotations:map[string]string{io.kubernetes.container.hash: 5e9677a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0486a262195e25e0cbcb85c7f856a35300a55c800deabb7b3cea1c342fb270,PodSandboxId:cab7dfbbaf216814b4579d7313dd71505a5e81c4b09c6bf1abec9adf853bd02d,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720467014260979278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a22fea0-2e74-4b1d-8943-4009c3bae190,},Annotations:map[string]string{io.kubernetes.container.hash: 2544e9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa46f641fce3a59d88fe88837a4c8c08f7e4447206bc8e44e12b9f4f5079abef,PodSandboxId:33f84dfe9bc8103d8a4d8447c3cb88183ca9f280e28de2f92203373b2195c63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Ima
ge:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720467010485771198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdmnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8790295-025f-492c-8527-b45580989758,},Annotations:map[string]string{io.kubernetes.container.hash: 933e8636,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49fc
1829105fd93b0c9eef5eaf11f30232d42efabb4cb4130c54a76a96ddbd82,PodSandboxId:36132fbfe93b031bbb4a7915d682454b32a82ec148af0e191ae9410b8818414d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720467007796499322,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7plgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dcd9909-5fdf-4a54-a66c-12498b65c28f,},Annotations:map[string]string{io.kubernetes.container.hash: 683e9680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d49be99483f5c15756481dec1f198cbd8e9d
a87539ae5759ec447421c2bf138,PodSandboxId:717561d56daf2914143b08bb1f10bf41c455065ce54ea0b073b734843dc7684e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720466987871363903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c73b77c9e8c067af0478499956a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a92a99f73b4cef445e51d38c9c94905a53d179bb9954413a5a15d3
c7b803b46,PodSandboxId:329650aaf1bc3112b2f246746ca0fdbb0bcf8fde6ea8df7451a3998bfc1a8642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720466987910144817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a0cfbd4519e6880ca99be18bf725eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35d37ebf78b3809e33fc570ccdc8fa7d7a0fd4dcb658545c70675d77960f080,PodSand
boxId:ffe430fd6cb316055ec66677e7e183a3803757ae260eb7eb9ebc754295c738be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720466987873699629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf9d34116c191cb68773ad425a33b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9332ffa119798ff8821e289f7966df0d8310e8c1a67d1304c5ba54479752c9
01,PodSandboxId:3b2f473211f40a9fd72f56007b83165481808f94dd45efcb518930f575189497,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720466987808420964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4feb2225d826d58f607b166f558fd389,},Annotations:map[string]string{io.kubernetes.container.hash: 73820d47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e24d5cf9-1b59-4c4f-ac21-7959c87c0ace name=/runtime.v1.RuntimeService/ListC
ontainers
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.411432828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c17679cc-0767-45e9-b7a0-119182432431 name=/runtime.v1.RuntimeService/Version
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.411508115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c17679cc-0767-45e9-b7a0-119182432431 name=/runtime.v1.RuntimeService/Version
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.412572457Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82cecd62-87bf-4d0b-8e41-671a62dcbf61 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.413879091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720467448413849211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587821,},InodesUsed:&UInt64Value{Value:204,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82cecd62-87bf-4d0b-8e41-671a62dcbf61 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.414525947Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c958b4d0-2587-49c0-a944-2c7698caa6f8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.414583810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c958b4d0-2587-49c0-a944-2c7698caa6f8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.414896428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f04837ef2d2a591237579fa868e9a2aef2dd5b55f6ca0f9e4216d0f9a5a77cb,PodSandboxId:8b3b8135d419631ff3173aa315156556e130137c4f9028add8ea5b0254fe418a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720467247927762236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-lznqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db22bb68-894a-454b-a1d2-9410d39a9528,},Annotations:map[string]string{io.kubernetes.container.hash: 3b742db2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ac69521142a6e35cd53b6146de1f860720de3a3b9d912255bd3b66a9ef1aa9,PodSandboxId:81bb11f417f17a79ce947d7ce9f7acc952bd3a5e0a0ee55786cd608bca00bdc0,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720467109169255473,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5771cdad-38eb-4b69-9d82-5a58ef2c2f4e,},Annotations:map[string]string{io.kubern
etes.container.hash: 92c93ea5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa89db705add5299b6512650662be261a1b54171a36defda0febaa4d76b7719,PodSandboxId:6221ab3e632e79d5d9bc777c45be85aa4398f095df5d16085e097688153d9fc6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720467097497177949,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-cgkpr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61b3fef5-b549-4aab-a5f7-da35eb3d4477,},Annotations:map[string]string{io.kubernetes.container.hash: fd1fb148,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15069a7b3f50f8b733f6b841313e7a8a53493fde2473f0d6937d3d42cdb19b58,PodSandboxId:a475af66e07627f5d7be099005a460014744a7e5e962deff973069a4ddf3ee6b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720467088955878112,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-gtf45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 60da309c-ad4b-4388-aa45-131c4fb0f4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 12f75852,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db2932e06f7be72942ef239e31d1031ce07694c0eb50c48426a91525fc5997b,PodSandboxId:7b106f06e44b15bc52775874e37735172477625277d973d9f8e510aa5a0f5007,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17204
67058658406337,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rf6p2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3ac6741a-bec9-4f29-a6eb-c73c7500970b,},Annotations:map[string]string{io.kubernetes.container.hash: a6c15013,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15a419524494fe4cac639c22abd343bc586a2b8dacee4ba44e05b64a982534b,PodSandboxId:494772db18f3ff4a6eed10b94a087e898e932f0db0dd5abca014a0e933a95851,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1720467052855153288,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-nqm94,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 894303f6-0b3f-451e-8b4c-a1269b70c68f,},Annotations:map[string]string{io.kubernetes.container.hash: b37e3ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d,PodSandboxId:68cc01146add074afc7474a39a65cf3f67d5159accedf923d725dfb2b979aa44,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720467042432339060,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c6gzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5607f8-de0f-4bb1-b219-54ef33238b21,},Annotations:map[string]string{io.kubernetes.container.hash: 5e9677a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0486a262195e25e0cbcb85c7f856a35300a55c800deabb7b3cea1c342fb270,PodSandboxId:cab7dfbbaf216814b4579d7313dd71505a5e81c4b09c6bf1abec9adf853bd02d,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720467014260979278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a22fea0-2e74-4b1d-8943-4009c3bae190,},Annotations:map[string]string{io.kubernetes.container.hash: 2544e9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa46f641fce3a59d88fe88837a4c8c08f7e4447206bc8e44e12b9f4f5079abef,PodSandboxId:33f84dfe9bc8103d8a4d8447c3cb88183ca9f280e28de2f92203373b2195c63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Ima
ge:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720467010485771198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdmnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8790295-025f-492c-8527-b45580989758,},Annotations:map[string]string{io.kubernetes.container.hash: 933e8636,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49fc
1829105fd93b0c9eef5eaf11f30232d42efabb4cb4130c54a76a96ddbd82,PodSandboxId:36132fbfe93b031bbb4a7915d682454b32a82ec148af0e191ae9410b8818414d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720467007796499322,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7plgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dcd9909-5fdf-4a54-a66c-12498b65c28f,},Annotations:map[string]string{io.kubernetes.container.hash: 683e9680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d49be99483f5c15756481dec1f198cbd8e9d
a87539ae5759ec447421c2bf138,PodSandboxId:717561d56daf2914143b08bb1f10bf41c455065ce54ea0b073b734843dc7684e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720466987871363903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c73b77c9e8c067af0478499956a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a92a99f73b4cef445e51d38c9c94905a53d179bb9954413a5a15d3
c7b803b46,PodSandboxId:329650aaf1bc3112b2f246746ca0fdbb0bcf8fde6ea8df7451a3998bfc1a8642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720466987910144817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a0cfbd4519e6880ca99be18bf725eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35d37ebf78b3809e33fc570ccdc8fa7d7a0fd4dcb658545c70675d77960f080,PodSand
boxId:ffe430fd6cb316055ec66677e7e183a3803757ae260eb7eb9ebc754295c738be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720466987873699629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf9d34116c191cb68773ad425a33b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9332ffa119798ff8821e289f7966df0d8310e8c1a67d1304c5ba54479752c9
01,PodSandboxId:3b2f473211f40a9fd72f56007b83165481808f94dd45efcb518930f575189497,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720466987808420964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4feb2225d826d58f607b166f558fd389,},Annotations:map[string]string{io.kubernetes.container.hash: 73820d47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c958b4d0-2587-49c0-a944-2c7698caa6f8 name=/runtime.v1.RuntimeService/ListC
ontainers
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.453635599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=525ed1b8-3033-4abd-84a8-856983a2d93c name=/runtime.v1.RuntimeService/Version
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.453710523Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=525ed1b8-3033-4abd-84a8-856983a2d93c name=/runtime.v1.RuntimeService/Version
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.455168486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ca5c511-987b-4db1-a8e5-ae1b6e765b48 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.456511459Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720467448456485038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587821,},InodesUsed:&UInt64Value{Value:204,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ca5c511-987b-4db1-a8e5-ae1b6e765b48 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.457154503Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=832800aa-cceb-40af-af23-952e561076cb name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.457210971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=832800aa-cceb-40af-af23-952e561076cb name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:37:28 addons-268316 crio[685]: time="2024-07-08 19:37:28.457524027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f04837ef2d2a591237579fa868e9a2aef2dd5b55f6ca0f9e4216d0f9a5a77cb,PodSandboxId:8b3b8135d419631ff3173aa315156556e130137c4f9028add8ea5b0254fe418a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1720467247927762236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-lznqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db22bb68-894a-454b-a1d2-9410d39a9528,},Annotations:map[string]string{io.kubernetes.container.hash: 3b742db2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ac69521142a6e35cd53b6146de1f860720de3a3b9d912255bd3b66a9ef1aa9,PodSandboxId:81bb11f417f17a79ce947d7ce9f7acc952bd3a5e0a0ee55786cd608bca00bdc0,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1720467109169255473,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5771cdad-38eb-4b69-9d82-5a58ef2c2f4e,},Annotations:map[string]string{io.kubern
etes.container.hash: 92c93ea5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa89db705add5299b6512650662be261a1b54171a36defda0febaa4d76b7719,PodSandboxId:6221ab3e632e79d5d9bc777c45be85aa4398f095df5d16085e097688153d9fc6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1720467097497177949,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-cgkpr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 61b3fef5-b549-4aab-a5f7-da35eb3d4477,},Annotations:map[string]string{io.kubernetes.container.hash: fd1fb148,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15069a7b3f50f8b733f6b841313e7a8a53493fde2473f0d6937d3d42cdb19b58,PodSandboxId:a475af66e07627f5d7be099005a460014744a7e5e962deff973069a4ddf3ee6b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1720467088955878112,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-gtf45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 60da309c-ad4b-4388-aa45-131c4fb0f4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 12f75852,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db2932e06f7be72942ef239e31d1031ce07694c0eb50c48426a91525fc5997b,PodSandboxId:7b106f06e44b15bc52775874e37735172477625277d973d9f8e510aa5a0f5007,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17204
67058658406337,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rf6p2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3ac6741a-bec9-4f29-a6eb-c73c7500970b,},Annotations:map[string]string{io.kubernetes.container.hash: a6c15013,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15a419524494fe4cac639c22abd343bc586a2b8dacee4ba44e05b64a982534b,PodSandboxId:494772db18f3ff4a6eed10b94a087e898e932f0db0dd5abca014a0e933a95851,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1720467052855153288,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-nqm94,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 894303f6-0b3f-451e-8b4c-a1269b70c68f,},Annotations:map[string]string{io.kubernetes.container.hash: b37e3ea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d,PodSandboxId:68cc01146add074afc7474a39a65cf3f67d5159accedf923d725dfb2b979aa44,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1720467042432339060,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-c6gzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5607f8-de0f-4bb1-b219-54ef33238b21,},Annotations:map[string]string{io.kubernetes.container.hash: 5e9677a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0486a262195e25e0cbcb85c7f856a35300a55c800deabb7b3cea1c342fb270,PodSandboxId:cab7dfbbaf216814b4579d7313dd71505a5e81c4b09c6bf1abec9adf853bd02d,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720467014260979278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a22fea0-2e74-4b1d-8943-4009c3bae190,},Annotations:map[string]string{io.kubernetes.container.hash: 2544e9c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa46f641fce3a59d88fe88837a4c8c08f7e4447206bc8e44e12b9f4f5079abef,PodSandboxId:33f84dfe9bc8103d8a4d8447c3cb88183ca9f280e28de2f92203373b2195c63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Ima
ge:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720467010485771198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mdmnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8790295-025f-492c-8527-b45580989758,},Annotations:map[string]string{io.kubernetes.container.hash: 933e8636,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49fc
1829105fd93b0c9eef5eaf11f30232d42efabb4cb4130c54a76a96ddbd82,PodSandboxId:36132fbfe93b031bbb4a7915d682454b32a82ec148af0e191ae9410b8818414d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720467007796499322,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7plgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dcd9909-5fdf-4a54-a66c-12498b65c28f,},Annotations:map[string]string{io.kubernetes.container.hash: 683e9680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d49be99483f5c15756481dec1f198cbd8e9d
a87539ae5759ec447421c2bf138,PodSandboxId:717561d56daf2914143b08bb1f10bf41c455065ce54ea0b073b734843dc7684e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720466987871363903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c73b77c9e8c067af0478499956a5c7,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a92a99f73b4cef445e51d38c9c94905a53d179bb9954413a5a15d3
c7b803b46,PodSandboxId:329650aaf1bc3112b2f246746ca0fdbb0bcf8fde6ea8df7451a3998bfc1a8642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720466987910144817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a0cfbd4519e6880ca99be18bf725eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 9e0468f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35d37ebf78b3809e33fc570ccdc8fa7d7a0fd4dcb658545c70675d77960f080,PodSand
boxId:ffe430fd6cb316055ec66677e7e183a3803757ae260eb7eb9ebc754295c738be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720466987873699629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf9d34116c191cb68773ad425a33b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9332ffa119798ff8821e289f7966df0d8310e8c1a67d1304c5ba54479752c9
01,PodSandboxId:3b2f473211f40a9fd72f56007b83165481808f94dd45efcb518930f575189497,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720466987808420964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-268316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4feb2225d826d58f607b166f558fd389,},Annotations:map[string]string{io.kubernetes.container.hash: 73820d47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=832800aa-cceb-40af-af23-952e561076cb name=/runtime.v1.RuntimeService/ListC
ontainers
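	The cri-o journal above is dominated by the kubelet's periodic CRI polling: each cycle issues Version, ImageFsInfo and ListContainers RPCs against the runtime socket, and an empty ContainerFilter produces the "No filters were applied, returning full container list" debug lines. As a minimal sketch (not part of the test run, and assuming cri-o's default unix socket path /var/run/crio/crio.sock), the same ListContainers call can be reproduced with the CRI v1 Go client; building it requires the k8s.io/cri-api and google.golang.org/grpc modules.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed endpoint: cri-o's default unix socket on the node.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// An empty request (no ContainerFilter) returns the full container
		// list, matching the ListContainers responses recorded in the log.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Truncate the ID to 13 characters, like the container status
			// table below.
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}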
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4f04837ef2d2a       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 3 minutes ago       Running             hello-world-app           0                   8b3b8135d4196       hello-world-app-86c47465fc-lznqj
	35ac69521142a       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         5 minutes ago       Running             nginx                     0                   81bb11f417f17       nginx
	8fa89db705add       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   6221ab3e632e7       headlamp-7867546754-cgkpr
	15069a7b3f50f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            5 minutes ago       Running             gcp-auth                  0                   a475af66e0762       gcp-auth-5db96cd9b4-gtf45
	1db2932e06f7b       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                         6 minutes ago       Running             yakd                      0                   7b106f06e44b1       yakd-dashboard-799879c74f-rf6p2
	a15a419524494       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        6 minutes ago       Running             local-path-provisioner    0                   494772db18f3f       local-path-provisioner-8d985888d-nqm94
	15a517fd0d065       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   68cc01146add0       metrics-server-c59844bb4-c6gzl
	0e0486a262195       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   cab7dfbbaf216       storage-provisioner
	aa46f641fce3a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   33f84dfe9bc81       coredns-7db6d8ff4d-mdmnx
	49fc1829105fd       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                        7 minutes ago       Running             kube-proxy                0                   36132fbfe93b0       kube-proxy-7plgc
	1a92a99f73b4c       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                        7 minutes ago       Running             kube-apiserver            0                   329650aaf1bc3       kube-apiserver-addons-268316
	e35d37ebf78b3       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                        7 minutes ago       Running             kube-controller-manager   0                   ffe430fd6cb31       kube-controller-manager-addons-268316
	9d49be99483f5       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                        7 minutes ago       Running             kube-scheduler            0                   717561d56daf2       kube-scheduler-addons-268316
	9332ffa119798       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   3b2f473211f40       etcd-addons-268316
	
	
	==> coredns [aa46f641fce3a59d88fe88837a4c8c08f7e4447206bc8e44e12b9f4f5079abef] <==
	[INFO] 10.244.0.8:57008 - 20929 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000135063s
	[INFO] 10.244.0.8:35379 - 49666 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000136303s
	[INFO] 10.244.0.8:35379 - 13313 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000063072s
	[INFO] 10.244.0.8:44352 - 48968 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081466s
	[INFO] 10.244.0.8:44352 - 10574 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102635s
	[INFO] 10.244.0.8:47113 - 57758 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110145s
	[INFO] 10.244.0.8:47113 - 62864 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006031s
	[INFO] 10.244.0.8:46632 - 8198 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097314s
	[INFO] 10.244.0.8:46632 - 64773 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000059706s
	[INFO] 10.244.0.8:57492 - 57986 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055222s
	[INFO] 10.244.0.8:57492 - 50308 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070743s
	[INFO] 10.244.0.8:58094 - 3548 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048957s
	[INFO] 10.244.0.8:58094 - 29150 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083368s
	[INFO] 10.244.0.8:35355 - 6749 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000052974s
	[INFO] 10.244.0.8:35355 - 34399 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000156485s
	[INFO] 10.244.0.22:43752 - 4695 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000496387s
	[INFO] 10.244.0.22:57415 - 5936 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000076941s
	[INFO] 10.244.0.22:54805 - 5906 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000103352s
	[INFO] 10.244.0.22:34679 - 17723 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079246s
	[INFO] 10.244.0.22:44111 - 3213 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105983s
	[INFO] 10.244.0.22:42665 - 34122 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063248s
	[INFO] 10.244.0.22:38716 - 2315 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000731728s
	[INFO] 10.244.0.22:54782 - 52403 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.00044554s
	[INFO] 10.244.0.25:55033 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000236273s
	[INFO] 10.244.0.25:36867 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116024s
	
	
	==> describe nodes <==
	Name:               addons-268316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-268316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=addons-268316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T19_29_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-268316
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:29:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-268316
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 19:37:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:34:29 +0000   Mon, 08 Jul 2024 19:29:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:34:29 +0000   Mon, 08 Jul 2024 19:29:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:34:29 +0000   Mon, 08 Jul 2024 19:29:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:34:29 +0000   Mon, 08 Jul 2024 19:29:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    addons-268316
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 12ddb492c0af4611b4c2501c2b7881af
	  System UUID:                12ddb492-c0af-4611-b4c2-501c2b7881af
	  Boot ID:                    8b0b105f-947c-4a97-ba70-9386535e08a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-lznqj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  gcp-auth                    gcp-auth-5db96cd9b4-gtf45                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  headlamp                    headlamp-7867546754-cgkpr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 coredns-7db6d8ff4d-mdmnx                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m22s
	  kube-system                 etcd-addons-268316                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m35s
	  kube-system                 kube-apiserver-addons-268316              250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-controller-manager-addons-268316     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-proxy-7plgc                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-scheduler-addons-268316              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 metrics-server-c59844bb4-c6gzl            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m16s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  local-path-storage          local-path-provisioner-8d985888d-nqm94    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  yakd-dashboard              yakd-dashboard-799879c74f-rf6p2           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m19s  kube-proxy       
	  Normal  Starting                 7m36s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m36s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m36s  kubelet          Node addons-268316 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s  kubelet          Node addons-268316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s  kubelet          Node addons-268316 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m34s  kubelet          Node addons-268316 status is now: NodeReady
	  Normal  RegisteredNode           7m23s  node-controller  Node addons-268316 event: Registered Node addons-268316 in Controller
	
	
	==> dmesg <==
	[  +0.087540] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.287343] kauditd_printk_skb: 18 callbacks suppressed
	[Jul 8 19:30] systemd-fstab-generator[1486]: Ignoring "noauto" option for root device
	[  +5.169700] kauditd_printk_skb: 103 callbacks suppressed
	[  +5.035103] kauditd_printk_skb: 125 callbacks suppressed
	[  +8.695411] kauditd_printk_skb: 98 callbacks suppressed
	[ +17.090744] kauditd_printk_skb: 8 callbacks suppressed
	[ +10.289816] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.945770] kauditd_printk_skb: 9 callbacks suppressed
	[Jul 8 19:31] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.238576] kauditd_printk_skb: 52 callbacks suppressed
	[  +6.038021] kauditd_printk_skb: 24 callbacks suppressed
	[ +10.547510] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.448399] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.294649] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.218769] kauditd_printk_skb: 53 callbacks suppressed
	[  +6.587815] kauditd_printk_skb: 39 callbacks suppressed
	[Jul 8 19:32] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.799676] kauditd_printk_skb: 29 callbacks suppressed
	[ +14.704295] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.557562] kauditd_printk_skb: 7 callbacks suppressed
	[ +23.123568] kauditd_printk_skb: 7 callbacks suppressed
	[Jul 8 19:33] kauditd_printk_skb: 33 callbacks suppressed
	[Jul 8 19:34] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.897966] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [9332ffa119798ff8821e289f7966df0d8310e8c1a67d1304c5ba54479752c901] <==
	{"level":"info","ts":"2024-07-08T19:31:14.981709Z","caller":"traceutil/trace.go:171","msg":"trace[1421868555] linearizableReadLoop","detail":"{readStateIndex:1116; appliedIndex:1115; }","duration":"340.425348ms","start":"2024-07-08T19:31:14.641269Z","end":"2024-07-08T19:31:14.981695Z","steps":["trace[1421868555] 'read index received'  (duration: 340.253266ms)","trace[1421868555] 'applied index is now lower than readState.Index'  (duration: 171.626µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T19:31:14.981883Z","caller":"traceutil/trace.go:171","msg":"trace[2046605009] transaction","detail":"{read_only:false; response_revision:1086; number_of_response:1; }","duration":"435.496297ms","start":"2024-07-08T19:31:14.546378Z","end":"2024-07-08T19:31:14.981875Z","steps":["trace[2046605009] 'process raft request'  (duration: 435.183712ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:14.981959Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-08T19:31:14.546361Z","time spent":"435.539697ms","remote":"127.0.0.1:33070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-268316\" mod_revision:1011 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-268316\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-268316\" > >"}
	{"level":"warn","ts":"2024-07-08T19:31:14.982048Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.795744ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-07-08T19:31:14.982092Z","caller":"traceutil/trace.go:171","msg":"trace[718086904] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1086; }","duration":"232.919802ms","start":"2024-07-08T19:31:14.749161Z","end":"2024-07-08T19:31:14.982081Z","steps":["trace[718086904] 'agreement among raft nodes before linearized reading'  (duration: 232.758903ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:14.982202Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.932826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-08T19:31:14.982224Z","caller":"traceutil/trace.go:171","msg":"trace[284475579] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1086; }","duration":"340.977006ms","start":"2024-07-08T19:31:14.641242Z","end":"2024-07-08T19:31:14.982219Z","steps":["trace[284475579] 'agreement among raft nodes before linearized reading'  (duration: 340.905087ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:14.982237Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-08T19:31:14.641228Z","time spent":"341.004414ms","remote":"127.0.0.1:32972","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11475,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-08T19:31:14.982311Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.668274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-08T19:31:14.982334Z","caller":"traceutil/trace.go:171","msg":"trace[2023217837] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1086; }","duration":"113.71307ms","start":"2024-07-08T19:31:14.868614Z","end":"2024-07-08T19:31:14.982327Z","steps":["trace[2023217837] 'agreement among raft nodes before linearized reading'  (duration: 113.634042ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:14.982435Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.214792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85554"}
	{"level":"info","ts":"2024-07-08T19:31:14.982453Z","caller":"traceutil/trace.go:171","msg":"trace[1292665772] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1086; }","duration":"230.255755ms","start":"2024-07-08T19:31:14.752191Z","end":"2024-07-08T19:31:14.982447Z","steps":["trace[1292665772] 'agreement among raft nodes before linearized reading'  (duration: 230.125276ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:31:25.323397Z","caller":"traceutil/trace.go:171","msg":"trace[352211193] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"309.119307ms","start":"2024-07-08T19:31:25.014261Z","end":"2024-07-08T19:31:25.323381Z","steps":["trace[352211193] 'process raft request'  (duration: 309.020066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:25.323637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.621624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-08T19:31:25.323397Z","caller":"traceutil/trace.go:171","msg":"trace[1487299839] linearizableReadLoop","detail":"{readStateIndex:1143; appliedIndex:1143; }","duration":"224.525358ms","start":"2024-07-08T19:31:25.098854Z","end":"2024-07-08T19:31:25.323379Z","steps":["trace[1487299839] 'read index received'  (duration: 224.518818ms)","trace[1487299839] 'applied index is now lower than readState.Index'  (duration: 5.684µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T19:31:25.3237Z","caller":"traceutil/trace.go:171","msg":"trace[199844631] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:1111; }","duration":"224.884035ms","start":"2024-07-08T19:31:25.098808Z","end":"2024-07-08T19:31:25.323692Z","steps":["trace[199844631] 'agreement among raft nodes before linearized reading'  (duration: 224.593635ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:25.323665Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-08T19:31:25.014246Z","time spent":"309.362174ms","remote":"127.0.0.1:33070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1102 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-07-08T19:31:25.381409Z","caller":"traceutil/trace.go:171","msg":"trace[1021177162] transaction","detail":"{read_only:false; response_revision:1112; number_of_response:1; }","duration":"176.682468ms","start":"2024-07-08T19:31:25.204704Z","end":"2024-07-08T19:31:25.381387Z","steps":["trace[1021177162] 'process raft request'  (duration: 174.153029ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:25.383704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.04023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-08T19:31:25.383843Z","caller":"traceutil/trace.go:171","msg":"trace[306522552] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1112; }","duration":"196.910025ms","start":"2024-07-08T19:31:25.186918Z","end":"2024-07-08T19:31:25.383828Z","steps":["trace[306522552] 'agreement among raft nodes before linearized reading'  (duration: 195.046097ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:31:25.384606Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.374772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-08T19:31:25.38473Z","caller":"traceutil/trace.go:171","msg":"trace[857643364] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1112; }","duration":"244.529124ms","start":"2024-07-08T19:31:25.140192Z","end":"2024-07-08T19:31:25.384721Z","steps":["trace[857643364] 'agreement among raft nodes before linearized reading'  (duration: 244.327416ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:32:21.399196Z","caller":"traceutil/trace.go:171","msg":"trace[454135626] transaction","detail":"{read_only:false; response_revision:1478; number_of_response:1; }","duration":"280.584913ms","start":"2024-07-08T19:32:21.118586Z","end":"2024-07-08T19:32:21.399171Z","steps":["trace[454135626] 'process raft request'  (duration: 280.166763ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:32:26.547334Z","caller":"traceutil/trace.go:171","msg":"trace[307156087] transaction","detail":"{read_only:false; response_revision:1504; number_of_response:1; }","duration":"131.109067ms","start":"2024-07-08T19:32:26.416197Z","end":"2024-07-08T19:32:26.547306Z","steps":["trace[307156087] 'process raft request'  (duration: 130.722018ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:32:51.969815Z","caller":"traceutil/trace.go:171","msg":"trace[1658838365] transaction","detail":"{read_only:false; response_revision:1588; number_of_response:1; }","duration":"124.393838ms","start":"2024-07-08T19:32:51.845394Z","end":"2024-07-08T19:32:51.969788Z","steps":["trace[1658838365] 'process raft request'  (duration: 124.272508ms)"],"step_count":1}
	
	
	==> gcp-auth [15069a7b3f50f8b733f6b841313e7a8a53493fde2473f0d6937d3d42cdb19b58] <==
	2024/07/08 19:31:29 GCP Auth Webhook started!
	2024/07/08 19:31:30 Ready to marshal response ...
	2024/07/08 19:31:30 Ready to write response ...
	2024/07/08 19:31:30 Ready to marshal response ...
	2024/07/08 19:31:30 Ready to write response ...
	2024/07/08 19:31:30 Ready to marshal response ...
	2024/07/08 19:31:30 Ready to write response ...
	2024/07/08 19:31:34 Ready to marshal response ...
	2024/07/08 19:31:34 Ready to write response ...
	2024/07/08 19:31:40 Ready to marshal response ...
	2024/07/08 19:31:40 Ready to write response ...
	2024/07/08 19:31:46 Ready to marshal response ...
	2024/07/08 19:31:46 Ready to write response ...
	2024/07/08 19:31:54 Ready to marshal response ...
	2024/07/08 19:31:54 Ready to write response ...
	2024/07/08 19:31:55 Ready to marshal response ...
	2024/07/08 19:31:55 Ready to write response ...
	2024/07/08 19:32:04 Ready to marshal response ...
	2024/07/08 19:32:04 Ready to write response ...
	2024/07/08 19:32:14 Ready to marshal response ...
	2024/07/08 19:32:14 Ready to write response ...
	2024/07/08 19:32:44 Ready to marshal response ...
	2024/07/08 19:32:44 Ready to write response ...
	2024/07/08 19:34:05 Ready to marshal response ...
	2024/07/08 19:34:05 Ready to write response ...
	
	
	==> kernel <==
	 19:37:28 up 8 min,  0 users,  load average: 1.59, 0.97, 0.61
	Linux addons-268316 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1a92a99f73b4cef445e51d38c9c94905a53d179bb9954413a5a15d3c7b803b46] <==
	W0708 19:31:43.721726       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 19:31:43.722367       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0708 19:31:43.722563       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.226.252:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.226.252:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.226.252:443: connect: connection refused
	E0708 19:31:43.727439       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.226.252:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.226.252:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.226.252:443: connect: connection refused
	I0708 19:31:43.793162       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0708 19:31:45.999954       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0708 19:31:46.222428       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.241.186"}
	I0708 19:31:49.407659       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0708 19:31:50.436686       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0708 19:32:28.134560       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0708 19:33:00.747388       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0708 19:33:00.752313       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0708 19:33:00.784661       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0708 19:33:00.784727       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0708 19:33:00.801635       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0708 19:33:00.801768       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0708 19:33:00.831472       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0708 19:33:00.832347       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0708 19:33:00.875255       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0708 19:33:00.875816       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0708 19:33:01.801878       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0708 19:33:01.875961       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0708 19:33:01.887224       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0708 19:34:05.370378       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.111.187"}
	
	
	==> kube-controller-manager [e35d37ebf78b3809e33fc570ccdc8fa7d7a0fd4dcb658545c70675d77960f080] <==
	W0708 19:35:15.500245       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:35:15.500316       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:35:19.632641       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:35:19.632756       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:35:27.917569       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:35:27.917626       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:35:53.194952       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:35:53.195059       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:36:03.989082       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:36:03.989245       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:36:04.373072       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:36:04.373127       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:36:09.108604       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:36:09.108708       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:36:36.295815       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:36:36.295899       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:36:43.490822       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:36:43.490866       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:36:46.404178       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:36:46.404271       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:36:57.609123       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:36:57.609235       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0708 19:37:25.742791       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 19:37:25.742907       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0708 19:37:27.372932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="15.671µs"
	
	
	==> kube-proxy [49fc1829105fd93b0c9eef5eaf11f30232d42efabb4cb4130c54a76a96ddbd82] <==
	I0708 19:30:08.969042       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:30:08.994726       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.231"]
	I0708 19:30:09.087327       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:30:09.087383       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:30:09.087400       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:30:09.090362       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:30:09.090566       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:30:09.090601       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:30:09.092375       1 config.go:192] "Starting service config controller"
	I0708 19:30:09.092384       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:30:09.092406       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:30:09.092409       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:30:09.092937       1 config.go:319] "Starting node config controller"
	I0708 19:30:09.092944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:30:09.192624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 19:30:09.192661       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:30:09.193345       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9d49be99483f5c15756481dec1f198cbd8e9da87539ae5759ec447421c2bf138] <==
	W0708 19:29:50.557423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 19:29:50.563974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 19:29:50.557606       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 19:29:50.564112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 19:29:51.370285       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 19:29:51.370382       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 19:29:51.388758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 19:29:51.388856       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 19:29:51.417972       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 19:29:51.418070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 19:29:51.420504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 19:29:51.420525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0708 19:29:51.437075       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 19:29:51.437182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0708 19:29:51.446590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 19:29:51.446617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 19:29:51.474313       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 19:29:51.474377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 19:29:51.477182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0708 19:29:51.477232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0708 19:29:51.488557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 19:29:51.488603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 19:29:51.626724       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 19:29:51.626809       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0708 19:29:53.841586       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 08 19:34:52 addons-268316 kubelet[1276]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:34:52 addons-268316 kubelet[1276]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 19:34:53 addons-268316 kubelet[1276]: I0708 19:34:53.511191    1276 scope.go:117] "RemoveContainer" containerID="35c1dd586ce67e6238a1cfaffc3490bd72a604cdd37589b6fc143c48bbe669bb"
	Jul 08 19:34:53 addons-268316 kubelet[1276]: I0708 19:34:53.533432    1276 scope.go:117] "RemoveContainer" containerID="648cabbd1c23d6e1cb4e2fe82a58559d0355fd3fc4814fb0faab0e47b04c08a6"
	Jul 08 19:35:52 addons-268316 kubelet[1276]: E0708 19:35:52.933690    1276 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:35:52 addons-268316 kubelet[1276]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:35:52 addons-268316 kubelet[1276]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:35:52 addons-268316 kubelet[1276]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:35:52 addons-268316 kubelet[1276]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 19:36:52 addons-268316 kubelet[1276]: E0708 19:36:52.934990    1276 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:36:52 addons-268316 kubelet[1276]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:36:52 addons-268316 kubelet[1276]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:36:52 addons-268316 kubelet[1276]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:36:52 addons-268316 kubelet[1276]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 19:37:27 addons-268316 kubelet[1276]: I0708 19:37:27.415447    1276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-lznqj" podStartSLOduration=200.309604001 podStartE2EDuration="3m22.415386749s" podCreationTimestamp="2024-07-08 19:34:05 +0000 UTC" firstStartedPulling="2024-07-08 19:34:05.803830296 +0000 UTC m=+253.050846581" lastFinishedPulling="2024-07-08 19:34:07.909613052 +0000 UTC m=+255.156629329" observedRunningTime="2024-07-08 19:34:08.255954844 +0000 UTC m=+255.502971137" watchObservedRunningTime="2024-07-08 19:37:27.415386749 +0000 UTC m=+454.662403034"
	Jul 08 19:37:28 addons-268316 kubelet[1276]: I0708 19:37:28.919549    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fa5607f8-de0f-4bb1-b219-54ef33238b21-tmp-dir\") pod \"fa5607f8-de0f-4bb1-b219-54ef33238b21\" (UID: \"fa5607f8-de0f-4bb1-b219-54ef33238b21\") "
	Jul 08 19:37:28 addons-268316 kubelet[1276]: I0708 19:37:28.919780    1276 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgm9q\" (UniqueName: \"kubernetes.io/projected/fa5607f8-de0f-4bb1-b219-54ef33238b21-kube-api-access-qgm9q\") pod \"fa5607f8-de0f-4bb1-b219-54ef33238b21\" (UID: \"fa5607f8-de0f-4bb1-b219-54ef33238b21\") "
	Jul 08 19:37:28 addons-268316 kubelet[1276]: I0708 19:37:28.921280    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa5607f8-de0f-4bb1-b219-54ef33238b21-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "fa5607f8-de0f-4bb1-b219-54ef33238b21" (UID: "fa5607f8-de0f-4bb1-b219-54ef33238b21"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 08 19:37:28 addons-268316 kubelet[1276]: I0708 19:37:28.929713    1276 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5607f8-de0f-4bb1-b219-54ef33238b21-kube-api-access-qgm9q" (OuterVolumeSpecName: "kube-api-access-qgm9q") pod "fa5607f8-de0f-4bb1-b219-54ef33238b21" (UID: "fa5607f8-de0f-4bb1-b219-54ef33238b21"). InnerVolumeSpecName "kube-api-access-qgm9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 08 19:37:29 addons-268316 kubelet[1276]: I0708 19:37:29.020745    1276 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fa5607f8-de0f-4bb1-b219-54ef33238b21-tmp-dir\") on node \"addons-268316\" DevicePath \"\""
	Jul 08 19:37:29 addons-268316 kubelet[1276]: I0708 19:37:29.020774    1276 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qgm9q\" (UniqueName: \"kubernetes.io/projected/fa5607f8-de0f-4bb1-b219-54ef33238b21-kube-api-access-qgm9q\") on node \"addons-268316\" DevicePath \"\""
	Jul 08 19:37:29 addons-268316 kubelet[1276]: I0708 19:37:29.081159    1276 scope.go:117] "RemoveContainer" containerID="15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d"
	Jul 08 19:37:29 addons-268316 kubelet[1276]: I0708 19:37:29.130381    1276 scope.go:117] "RemoveContainer" containerID="15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d"
	Jul 08 19:37:29 addons-268316 kubelet[1276]: E0708 19:37:29.131201    1276 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d\": container with ID starting with 15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d not found: ID does not exist" containerID="15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d"
	Jul 08 19:37:29 addons-268316 kubelet[1276]: I0708 19:37:29.131241    1276 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d"} err="failed to get container status \"15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d\": rpc error: code = NotFound desc = could not find container \"15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d\": container with ID starting with 15a517fd0d06500f2a502061901c09a42fce3ea27eeaa04b2d9341c8f3670f8d not found: ID does not exist"
	
	
	==> storage-provisioner [0e0486a262195e25e0cbcb85c7f856a35300a55c800deabb7b3cea1c342fb270] <==
	I0708 19:30:15.131453       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 19:30:15.233564       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 19:30:15.233627       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 19:30:15.258347       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 19:30:15.258517       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-268316_4eba19e1-2747-409b-8c55-d9f213142986!
	I0708 19:30:15.258578       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e6a6c31-07a9-4ff0-9bf6-9b1e82c6f6b4", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-268316_4eba19e1-2747-409b-8c55-d9f213142986 became leader
	I0708 19:30:15.361855       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-268316_4eba19e1-2747-409b-8c55-d9f213142986!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-268316 -n addons-268316
helpers_test.go:261: (dbg) Run:  kubectl --context addons-268316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (348.05s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.25s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-268316
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-268316: exit status 82 (2m0.46605515s)

                                                
                                                
-- stdout --
	* Stopping node "addons-268316"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-268316" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-268316
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-268316: exit status 11 (21.495631435s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-268316" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-268316
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-268316: exit status 11 (6.14282585s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-268316" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-268316
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-268316: exit status 11 (6.142781396s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-268316" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.25s)
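Note on the failures above: each MK_ADDON_ENABLE_PAUSED / MK_ADDON_DISABLE_PAUSED exit bottoms out in the same step, minikube opening an SSH session to the node to run crictl and check for paused containers, and the dial to 192.168.39.231:22 failing with "no route to host", i.e. the node is unreachable over SSH (consistent with the profile having been stopped earlier in this test). The following is only an illustrative Go sketch of that reachability probe, assuming nothing beyond the address taken from the log; sshReachable is a hypothetical helper, not minikube code.

// Illustrative only: probe the node's SSH endpoint the way the failing
// "dial tcp 192.168.39.231:22" step does, before attempting addon changes.
// The address comes from the log above; this is not how minikube implements it.
package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable (hypothetical helper) reports whether a TCP connection to the
// node's SSH port can be opened within the timeout.
func sshReachable(hostPort string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", hostPort, timeout)
	if err != nil {
		return fmt.Errorf("node SSH endpoint unreachable: %w", err)
	}
	return conn.Close()
}

func main() {
	if err := sshReachable("192.168.39.231:22", 5*time.Second); err != nil {
		// Against a stopped VM this reproduces the "no route to host"
		// condition reported as exit status 11 above.
		fmt.Println(err)
		return
	}
	fmt.Println("SSH endpoint reachable; the paused check would be able to run crictl")
}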

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-787563 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-m6f4d" [3b20398e-d209-483f-83b4-15c8cb779609] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1795: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1795: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-787563 -n functional-787563
functional_test.go:1795: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2024-07-08 19:54:36.065419142 +0000 UTC m=+1535.978296938
functional_test.go:1795: (dbg) Run:  kubectl --context functional-787563 describe po mysql-64454c8b5c-m6f4d -n default
functional_test.go:1795: (dbg) kubectl --context functional-787563 describe po mysql-64454c8b5c-m6f4d -n default:
Name:             mysql-64454c8b5c-m6f4d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-787563/192.168.39.54
Start Time:       Mon, 08 Jul 2024 19:44:35 +0000
Labels:           app=mysql
                  pod-template-hash=64454c8b5c
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/mysql-64454c8b5c
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r64md (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-r64md:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/mysql-64454c8b5c-m6f4d to functional-787563
functional_test.go:1795: (dbg) Run:  kubectl --context functional-787563 logs mysql-64454c8b5c-m6f4d -n default
functional_test.go:1795: (dbg) Non-zero exit: kubectl --context functional-787563 logs mysql-64454c8b5c-m6f4d -n default: exit status 1 (68.705621ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-64454c8b5c-m6f4d" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
functional_test.go:1795: kubectl --context functional-787563 logs mysql-64454c8b5c-m6f4d -n default: exit status 1
functional_test.go:1797: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
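For context on what the 10m0s wait above is doing: the harness polls the default namespace for a pod matching the app=mysql selector until it is ready or the deadline passes. The real logic lives in the test helpers (helpers_test.go, not shown here); the sketch below is only an illustrative client-go version, assuming a kubeconfig at $HOME/.kube/config, the same app=mysql selector, and the same 10-minute budget, and using pod phase Running as a simple stand-in for the harness's readiness check. In this run it would time out the same way, since the pod never leaves ContainerCreating (the describe output above shows only the Scheduled event).

// Minimal sketch of a label-selector wait, not the harness's actual helper.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig in the default location.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 5s, give up after 10m (the budget the test log shows).
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: "app=mysql"})
			if err != nil {
				return false, nil // treat list errors as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("pod app=mysql did not reach Running:", err)
		return
	}
	fmt.Println("pod app=mysql is Running")
}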
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-787563 -n functional-787563
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-787563 logs -n 25: (1.531668865s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-787563 ssh findmnt                                            | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-787563                                                     | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port3497785327/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-787563 ssh findmnt                                            | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-787563 ssh -- ls                                              | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-787563 ssh sudo                                               | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-787563                                                     | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3529591915/001:/mount1   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-787563                                                     | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3529591915/001:/mount3   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-787563                                                     | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3529591915/001:/mount2   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-787563 ssh findmnt                                            | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-787563 ssh findmnt                                            | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-787563 ssh findmnt                                            | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-787563 ssh findmnt                                            | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-787563                                                     | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| start          | -p functional-787563                                                     | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC |                     |
	|                | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|                | --driver=kvm2                                                            |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start          | -p functional-787563                                                     | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC |                     |
	|                | --dry-run --alsologtostderr                                              |                   |         |         |                     |                     |
	|                | -v=1 --driver=kvm2                                                       |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| update-context | functional-787563                                                        | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-787563                                                        | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-787563                                                        | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| image          | functional-787563                                                        | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-787563                                                        | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-787563 ssh pgrep                                              | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-787563 image build -t                                         | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | localhost/my-image:functional-787563                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-787563 image ls                                               | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	| image          | functional-787563                                                        | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-787563                                                        | functional-787563 | jenkins | v1.33.1 | 08 Jul 24 19:44 UTC | 08 Jul 24 19:44 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 19:44:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 19:44:48.119172   23158 out.go:291] Setting OutFile to fd 1 ...
	I0708 19:44:48.119432   23158 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:44:48.119441   23158 out.go:304] Setting ErrFile to fd 2...
	I0708 19:44:48.119464   23158 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:44:48.119681   23158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 19:44:48.120245   23158 out.go:298] Setting JSON to false
	I0708 19:44:48.121179   23158 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1637,"bootTime":1720466251,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 19:44:48.121238   23158 start.go:139] virtualization: kvm guest
	I0708 19:44:48.123359   23158 out.go:177] * [functional-787563] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 19:44:48.124819   23158 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 19:44:48.124853   23158 notify.go:220] Checking for updates...
	I0708 19:44:48.127930   23158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 19:44:48.129200   23158 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:44:48.130461   23158 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:44:48.131745   23158 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 19:44:48.132938   23158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 19:44:48.134510   23158 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:44:48.134943   23158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:44:48.135020   23158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:44:48.150569   23158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41357
	I0708 19:44:48.151043   23158 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:44:48.151639   23158 main.go:141] libmachine: Using API Version  1
	I0708 19:44:48.151656   23158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:44:48.152127   23158 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:44:48.152349   23158 main.go:141] libmachine: (functional-787563) Calling .DriverName
	I0708 19:44:48.152633   23158 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 19:44:48.152973   23158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:44:48.153031   23158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:44:48.168127   23158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38579
	I0708 19:44:48.168580   23158 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:44:48.169042   23158 main.go:141] libmachine: Using API Version  1
	I0708 19:44:48.169077   23158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:44:48.169407   23158 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:44:48.169593   23158 main.go:141] libmachine: (functional-787563) Calling .DriverName
	I0708 19:44:48.204504   23158 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 19:44:48.205821   23158 start.go:297] selected driver: kvm2
	I0708 19:44:48.205839   23158 start.go:901] validating driver "kvm2" against &{Name:functional-787563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-787563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:44:48.205945   23158 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 19:44:48.206829   23158 cni.go:84] Creating CNI manager for ""
	I0708 19:44:48.206850   23158 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 19:44:48.206888   23158 start.go:340] cluster config:
	{Name:functional-787563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-787563 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:44:48.208386   23158 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.860686138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720468476860597283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260166,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be820ee9-3212-41e1-b571-8dcf5d138504 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.861309620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26fab53b-ade7-4338-a173-94c8433c5e79 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.861497187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26fab53b-ade7-4338-a173-94c8433c5e79 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.862001971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf8421588b0117298fccc6f8245894cd27c5d70082a8990e29882376158ac85f,PodSandboxId:6906d482374e4e80bfab581bfcba445b99528067a6eaaefcb9061b242580c4b5,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c,State:CONTAINER_RUNNING,CreatedAt:1720467893578025671,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68b519c2-8ffe-49fe-b672-d7f3da891367,},Annotations:map[string]string{io.kubernetes.container.hash: da885f30,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d3e83d36c7c825aa1f71f6970c09f5d3ad6a4615746d6bb35ac0fea491df10f,PodSandboxId:b086314d0cce433f567731cf7bb8c223cdf376e487ee94654cbd1088f975a384,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1720467893413505452,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-5p9cj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4bf5e1fd-9d15-4b2f-860b-bb5e3151ddeb,},Annotations:map[string]string{io.kubernetes.container
.hash: f96ef43b,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34aad32c3a4b47350ce271bb67a4db791f93483fcbd357ac4bda5f023e47bbf,PodSandboxId:20e8dc6484984a5123a31ed57d5a591fa0fa19a9c0ea5c27302be875fb8cfbea,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1720467891738692554,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-vvh8x,io.kubernetes.pod.namespace:
kubernetes-dashboard,io.kubernetes.pod.uid: a64a5c6c-5567-43c4-be14-cd4c892cfab2,},Annotations:map[string]string{io.kubernetes.container.hash: ad58d09e,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9721bd240114db30c69bb03b78ae79aef767fc535fe74fe7d1cee32c7f77d1e,PodSandboxId:8e0370ecb1706b12d2590f94139fe3668a0d31112bc8b4e63b64db219e34baa3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1720467881333758088,Labels:map[string]string{io.kubernetes.contai
ner.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 635fb99f-c908-4897-9eca-db45b921fa95,},Annotations:map[string]string{io.kubernetes.container.hash: 50247321,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8286bb7be66385d1498cba96fe004499853b5d6642a23d374814c618875170c2,PodSandboxId:705911e0cab5c10771ee8f493dc058a68d1d8afb8b61083a2bca195bf2539e5e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1720467868128724969,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod
.name: hello-node-connect-57b4589c47-vp8fl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe69e989-9c23-422b-9945-e0ce4e0bc5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 99cb2414,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe6f3451c7b707ad7f0a67a8deda8895ddd2868f619654b6ac02b805d94746d,PodSandboxId:40fe09c6bb22a7e810f3e0149e158fa7733f7959e48a36f5c2958bdf7a70427d,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1720467867858216792,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-6d85cfcfd8-gzmzp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 586d22c6-94ee-4bce-bdaf-15cd4bd2a888,},Annotations:map[string]string{io.kubernetes.container.hash: 4ccb58ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8297c83c2f6827537575adb11f8b1ef407fd238d4b5fca15ed6f5db60e8339,PodSandboxId:3797051907dc0455796687621d5efa6a445b6adf899963a60a0829b9c80fdd6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720467836889311039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kub
e-proxy-8628r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65592c4e-f2fd-485c-9955-d5fc8919733c,},Annotations:map[string]string{io.kubernetes.container.hash: c33663fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:776271f951cf908757333ad39ce6b9d0d640513ee8b48a07d18e3bed1d6337f4,PodSandboxId:d6a605b819e7fcbbfeefbead60f4d584b6974bf347cac81098b903e150218714,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720467836903179353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisione
r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fc86579-517b-4253-a3bb-7b8b65b1c67c,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5d2b8c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f6d112092815eed9cc46792b17fa99e3806b009b0ad32c8ae5debc672ab5c4,PodSandboxId:6dc5669e1e7d7bf1a0d5ef1e44f977e1710adfab9049051ac569028c8b4964cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720467836885083505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rn86,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 6f1d749e-ad28-4afb-b77a-0c65487d73ef,},Annotations:map[string]string{io.kubernetes.container.hash: fffecc82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1384e09f76d0a283744d925119ceab38c9357ae829e892825ea0640bdfaab4f2,PodSandboxId:0cdbed0491f9c316251043d4198e4a4a998ae43f7cc7f24fa739786e7a4fd416,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6
bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720467833268056444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc8bd34f975c456dc342de7f12106d4,},Annotations:map[string]string{io.kubernetes.container.hash: 2c182bef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb01c20c64b0ca4468a3d6f00ea95c228dc52e3ca3009e1cd9bdc462e4eb2c49,PodSandboxId:a8166f22451cc528cdb6e859427f24ea45dbbe2291c52e018299ed3ab09f5a7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:
CONTAINER_RUNNING,CreatedAt:1720467833090722462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e77e0701059f90a37e5a5a97c73485,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:656b89a78e1b1114dd7145bafc59d647e62d9aacfea6f45d58e95b4da27f215f,PodSandboxId:a2ec6b3d7d56783aa02356b9f900a4781a76bbb50b2c2b3c27e36ef59f14cf8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTA
INER_RUNNING,CreatedAt:1720467833071835259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d0813c778c3a8b4f65eb8ad25d7f62,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0db96b6546264ea14634c40a93c7236d9d2a0a05427de2a07547a79c57514a0,PodSandboxId:887a1237bcd00644e77d85c7ae4c514b9ec90f38c202ca4829889b0391a87831,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_
RUNNING,CreatedAt:1720467833062418242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c651de930fe24e21f01e847e92ccd8,},Annotations:map[string]string{io.kubernetes.container.hash: a6167f3e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce2d445cc77e3f911746b0aeff0ba32edd8519f553d9c3d6c8ae4c8ed7f1f6e,PodSandboxId:099bea57c762e72638e07a928306ae6f887e216f6278a4bcf26e416e07a1a671,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720467796784398924,Lab
els:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rn86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f1d749e-ad28-4afb-b77a-0c65487d73ef,},Annotations:map[string]string{io.kubernetes.container.hash: fffecc82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d759a85f805c07da420d8af212a63eda5d55f2bb5bee54df3181714f9cbdbfc,PodSandboxId:a28870285aa86509944e44042c6aab1507f1d54d775dd80d3754950f849d6738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720467796339868082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8628r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65592c4e-f2fd-485c-9955-d5fc8919733c,},Annotations:map[string]string{io.kubernetes.container.hash: c33663fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7a560c4e5ec5b7c28ad79761fc33d764386cf750db9587b525b19d8d89660b,PodSandboxId:4bd0547ee8efe6ccd5acfd0bf383fd0e825a296482beec96832a47864169938b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720467796334573635,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fc86579-517b-4253-a3bb-7b8b65b1c67c,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5d2b8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35857c63fdc767af033560cff3bc9f314e9609ace1e13840898a4a31787f9edb,PodSandboxId:6fc3d06730efbd704f6d46c7d09df9859129f49c7071b10c5722d1b40958ca60,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720467791576264621,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c651de930fe24e21f01e847e92ccd8,},Annotations:map[string]string{io.kubernetes.container.hash: a6167f3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2f3102377551c44f1aa1e792e12a4ece62f9ccb4be567a820ca4b3da662b70,PodSandboxId:3a45a0c71bde6c5d352b9a54b6bde562d7baaacb667d75a5f6a0d8e138b2e15e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa13
9453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720467791551165435,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e77e0701059f90a37e5a5a97c73485,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0a126fc2a29c0b72755623bed4650a5371459f085284ee13f6bcd1a9fd9321,PodSandboxId:3463cdd9bf52c8941c20f0ea47cc6276a8d7f367117f3aa3795256e315ae9e4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f6
8704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720467791518504329,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d0813c778c3a8b4f65eb8ad25d7f62,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26fab53b-ade7-4338-a173-94c8433c5e79 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.911389938Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37719e50-21f6-444e-9cf3-eb3d505054e2 name=/runtime.v1.RuntimeService/Version
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.911483660Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37719e50-21f6-444e-9cf3-eb3d505054e2 name=/runtime.v1.RuntimeService/Version
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.913174199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a16c469-a7f9-43f8-8ed2-bf827b13fdc5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.914083271Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720468476914056111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260166,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a16c469-a7f9-43f8-8ed2-bf827b13fdc5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.914707050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=428917d9-1694-4dcc-8819-23dc5bd981e4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.914783178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=428917d9-1694-4dcc-8819-23dc5bd981e4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.915170795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf8421588b0117298fccc6f8245894cd27c5d70082a8990e29882376158ac85f,PodSandboxId:6906d482374e4e80bfab581bfcba445b99528067a6eaaefcb9061b242580c4b5,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c,State:CONTAINER_RUNNING,CreatedAt:1720467893578025671,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68b519c2-8ffe-49fe-b672-d7f3da891367,},Annotations:map[string]string{io.kubernetes.container.hash: da885f30,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d3e83d36c7c825aa1f71f6970c09f5d3ad6a4615746d6bb35ac0fea491df10f,PodSandboxId:b086314d0cce433f567731cf7bb8c223cdf376e487ee94654cbd1088f975a384,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1720467893413505452,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-5p9cj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4bf5e1fd-9d15-4b2f-860b-bb5e3151ddeb,},Annotations:map[string]string{io.kubernetes.container
.hash: f96ef43b,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34aad32c3a4b47350ce271bb67a4db791f93483fcbd357ac4bda5f023e47bbf,PodSandboxId:20e8dc6484984a5123a31ed57d5a591fa0fa19a9c0ea5c27302be875fb8cfbea,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1720467891738692554,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-vvh8x,io.kubernetes.pod.namespace:
kubernetes-dashboard,io.kubernetes.pod.uid: a64a5c6c-5567-43c4-be14-cd4c892cfab2,},Annotations:map[string]string{io.kubernetes.container.hash: ad58d09e,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9721bd240114db30c69bb03b78ae79aef767fc535fe74fe7d1cee32c7f77d1e,PodSandboxId:8e0370ecb1706b12d2590f94139fe3668a0d31112bc8b4e63b64db219e34baa3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1720467881333758088,Labels:map[string]string{io.kubernetes.contai
ner.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 635fb99f-c908-4897-9eca-db45b921fa95,},Annotations:map[string]string{io.kubernetes.container.hash: 50247321,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8286bb7be66385d1498cba96fe004499853b5d6642a23d374814c618875170c2,PodSandboxId:705911e0cab5c10771ee8f493dc058a68d1d8afb8b61083a2bca195bf2539e5e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1720467868128724969,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod
.name: hello-node-connect-57b4589c47-vp8fl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe69e989-9c23-422b-9945-e0ce4e0bc5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 99cb2414,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe6f3451c7b707ad7f0a67a8deda8895ddd2868f619654b6ac02b805d94746d,PodSandboxId:40fe09c6bb22a7e810f3e0149e158fa7733f7959e48a36f5c2958bdf7a70427d,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1720467867858216792,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-6d85cfcfd8-gzmzp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 586d22c6-94ee-4bce-bdaf-15cd4bd2a888,},Annotations:map[string]string{io.kubernetes.container.hash: 4ccb58ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8297c83c2f6827537575adb11f8b1ef407fd238d4b5fca15ed6f5db60e8339,PodSandboxId:3797051907dc0455796687621d5efa6a445b6adf899963a60a0829b9c80fdd6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720467836889311039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kub
e-proxy-8628r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65592c4e-f2fd-485c-9955-d5fc8919733c,},Annotations:map[string]string{io.kubernetes.container.hash: c33663fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:776271f951cf908757333ad39ce6b9d0d640513ee8b48a07d18e3bed1d6337f4,PodSandboxId:d6a605b819e7fcbbfeefbead60f4d584b6974bf347cac81098b903e150218714,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720467836903179353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisione
r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fc86579-517b-4253-a3bb-7b8b65b1c67c,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5d2b8c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f6d112092815eed9cc46792b17fa99e3806b009b0ad32c8ae5debc672ab5c4,PodSandboxId:6dc5669e1e7d7bf1a0d5ef1e44f977e1710adfab9049051ac569028c8b4964cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720467836885083505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rn86,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 6f1d749e-ad28-4afb-b77a-0c65487d73ef,},Annotations:map[string]string{io.kubernetes.container.hash: fffecc82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1384e09f76d0a283744d925119ceab38c9357ae829e892825ea0640bdfaab4f2,PodSandboxId:0cdbed0491f9c316251043d4198e4a4a998ae43f7cc7f24fa739786e7a4fd416,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6
bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720467833268056444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc8bd34f975c456dc342de7f12106d4,},Annotations:map[string]string{io.kubernetes.container.hash: 2c182bef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb01c20c64b0ca4468a3d6f00ea95c228dc52e3ca3009e1cd9bdc462e4eb2c49,PodSandboxId:a8166f22451cc528cdb6e859427f24ea45dbbe2291c52e018299ed3ab09f5a7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:
CONTAINER_RUNNING,CreatedAt:1720467833090722462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e77e0701059f90a37e5a5a97c73485,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:656b89a78e1b1114dd7145bafc59d647e62d9aacfea6f45d58e95b4da27f215f,PodSandboxId:a2ec6b3d7d56783aa02356b9f900a4781a76bbb50b2c2b3c27e36ef59f14cf8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTA
INER_RUNNING,CreatedAt:1720467833071835259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d0813c778c3a8b4f65eb8ad25d7f62,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0db96b6546264ea14634c40a93c7236d9d2a0a05427de2a07547a79c57514a0,PodSandboxId:887a1237bcd00644e77d85c7ae4c514b9ec90f38c202ca4829889b0391a87831,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_
RUNNING,CreatedAt:1720467833062418242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c651de930fe24e21f01e847e92ccd8,},Annotations:map[string]string{io.kubernetes.container.hash: a6167f3e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce2d445cc77e3f911746b0aeff0ba32edd8519f553d9c3d6c8ae4c8ed7f1f6e,PodSandboxId:099bea57c762e72638e07a928306ae6f887e216f6278a4bcf26e416e07a1a671,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720467796784398924,Lab
els:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rn86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f1d749e-ad28-4afb-b77a-0c65487d73ef,},Annotations:map[string]string{io.kubernetes.container.hash: fffecc82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d759a85f805c07da420d8af212a63eda5d55f2bb5bee54df3181714f9cbdbfc,PodSandboxId:a28870285aa86509944e44042c6aab1507f1d54d775dd80d3754950f849d6738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720467796339868082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8628r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65592c4e-f2fd-485c-9955-d5fc8919733c,},Annotations:map[string]string{io.kubernetes.container.hash: c33663fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7a560c4e5ec5b7c28ad79761fc33d764386cf750db9587b525b19d8d89660b,PodSandboxId:4bd0547ee8efe6ccd5acfd0bf383fd0e825a296482beec96832a47864169938b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720467796334573635,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fc86579-517b-4253-a3bb-7b8b65b1c67c,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5d2b8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35857c63fdc767af033560cff3bc9f314e9609ace1e13840898a4a31787f9edb,PodSandboxId:6fc3d06730efbd704f6d46c7d09df9859129f49c7071b10c5722d1b40958ca60,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720467791576264621,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c651de930fe24e21f01e847e92ccd8,},Annotations:map[string]string{io.kubernetes.container.hash: a6167f3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2f3102377551c44f1aa1e792e12a4ece62f9ccb4be567a820ca4b3da662b70,PodSandboxId:3a45a0c71bde6c5d352b9a54b6bde562d7baaacb667d75a5f6a0d8e138b2e15e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa13
9453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720467791551165435,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e77e0701059f90a37e5a5a97c73485,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0a126fc2a29c0b72755623bed4650a5371459f085284ee13f6bcd1a9fd9321,PodSandboxId:3463cdd9bf52c8941c20f0ea47cc6276a8d7f367117f3aa3795256e315ae9e4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f6
8704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720467791518504329,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d0813c778c3a8b4f65eb8ad25d7f62,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=428917d9-1694-4dcc-8819-23dc5bd981e4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.953247284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a954997a-9214-4c8f-b728-62bc36fa663c name=/runtime.v1.RuntimeService/Version
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.953402947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a954997a-9214-4c8f-b728-62bc36fa663c name=/runtime.v1.RuntimeService/Version
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.955058632Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19abae00-ec50-4ed0-a79c-50177960e291 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.955861133Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720468476955834028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260166,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19abae00-ec50-4ed0-a79c-50177960e291 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.956453490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=447f717f-f035-4128-8998-2002996a2f6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.956510213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=447f717f-f035-4128-8998-2002996a2f6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:54:36 functional-787563 crio[4260]: time="2024-07-08 19:54:36.956936762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf8421588b0117298fccc6f8245894cd27c5d70082a8990e29882376158ac85f,PodSandboxId:6906d482374e4e80bfab581bfcba445b99528067a6eaaefcb9061b242580c4b5,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c,State:CONTAINER_RUNNING,CreatedAt:1720467893578025671,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68b519c2-8ffe-49fe-b672-d7f3da891367,},Annotations:map[string]string{io.kubernetes.container.hash: da885f30,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d3e83d36c7c825aa1f71f6970c09f5d3ad6a4615746d6bb35ac0fea491df10f,PodSandboxId:b086314d0cce433f567731cf7bb8c223cdf376e487ee94654cbd1088f975a384,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1720467893413505452,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-5p9cj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4bf5e1fd-9d15-4b2f-860b-bb5e3151ddeb,},Annotations:map[string]string{io.kubernetes.container
.hash: f96ef43b,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34aad32c3a4b47350ce271bb67a4db791f93483fcbd357ac4bda5f023e47bbf,PodSandboxId:20e8dc6484984a5123a31ed57d5a591fa0fa19a9c0ea5c27302be875fb8cfbea,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1720467891738692554,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-vvh8x,io.kubernetes.pod.namespace:
kubernetes-dashboard,io.kubernetes.pod.uid: a64a5c6c-5567-43c4-be14-cd4c892cfab2,},Annotations:map[string]string{io.kubernetes.container.hash: ad58d09e,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9721bd240114db30c69bb03b78ae79aef767fc535fe74fe7d1cee32c7f77d1e,PodSandboxId:8e0370ecb1706b12d2590f94139fe3668a0d31112bc8b4e63b64db219e34baa3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1720467881333758088,Labels:map[string]string{io.kubernetes.contai
ner.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 635fb99f-c908-4897-9eca-db45b921fa95,},Annotations:map[string]string{io.kubernetes.container.hash: 50247321,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8286bb7be66385d1498cba96fe004499853b5d6642a23d374814c618875170c2,PodSandboxId:705911e0cab5c10771ee8f493dc058a68d1d8afb8b61083a2bca195bf2539e5e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1720467868128724969,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod
.name: hello-node-connect-57b4589c47-vp8fl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe69e989-9c23-422b-9945-e0ce4e0bc5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 99cb2414,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe6f3451c7b707ad7f0a67a8deda8895ddd2868f619654b6ac02b805d94746d,PodSandboxId:40fe09c6bb22a7e810f3e0149e158fa7733f7959e48a36f5c2958bdf7a70427d,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1720467867858216792,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-6d85cfcfd8-gzmzp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 586d22c6-94ee-4bce-bdaf-15cd4bd2a888,},Annotations:map[string]string{io.kubernetes.container.hash: 4ccb58ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8297c83c2f6827537575adb11f8b1ef407fd238d4b5fca15ed6f5db60e8339,PodSandboxId:3797051907dc0455796687621d5efa6a445b6adf899963a60a0829b9c80fdd6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720467836889311039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kub
e-proxy-8628r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65592c4e-f2fd-485c-9955-d5fc8919733c,},Annotations:map[string]string{io.kubernetes.container.hash: c33663fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:776271f951cf908757333ad39ce6b9d0d640513ee8b48a07d18e3bed1d6337f4,PodSandboxId:d6a605b819e7fcbbfeefbead60f4d584b6974bf347cac81098b903e150218714,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720467836903179353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisione
r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fc86579-517b-4253-a3bb-7b8b65b1c67c,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5d2b8c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f6d112092815eed9cc46792b17fa99e3806b009b0ad32c8ae5debc672ab5c4,PodSandboxId:6dc5669e1e7d7bf1a0d5ef1e44f977e1710adfab9049051ac569028c8b4964cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720467836885083505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rn86,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 6f1d749e-ad28-4afb-b77a-0c65487d73ef,},Annotations:map[string]string{io.kubernetes.container.hash: fffecc82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1384e09f76d0a283744d925119ceab38c9357ae829e892825ea0640bdfaab4f2,PodSandboxId:0cdbed0491f9c316251043d4198e4a4a998ae43f7cc7f24fa739786e7a4fd416,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6
bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720467833268056444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc8bd34f975c456dc342de7f12106d4,},Annotations:map[string]string{io.kubernetes.container.hash: 2c182bef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb01c20c64b0ca4468a3d6f00ea95c228dc52e3ca3009e1cd9bdc462e4eb2c49,PodSandboxId:a8166f22451cc528cdb6e859427f24ea45dbbe2291c52e018299ed3ab09f5a7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:
CONTAINER_RUNNING,CreatedAt:1720467833090722462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e77e0701059f90a37e5a5a97c73485,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:656b89a78e1b1114dd7145bafc59d647e62d9aacfea6f45d58e95b4da27f215f,PodSandboxId:a2ec6b3d7d56783aa02356b9f900a4781a76bbb50b2c2b3c27e36ef59f14cf8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTA
INER_RUNNING,CreatedAt:1720467833071835259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d0813c778c3a8b4f65eb8ad25d7f62,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0db96b6546264ea14634c40a93c7236d9d2a0a05427de2a07547a79c57514a0,PodSandboxId:887a1237bcd00644e77d85c7ae4c514b9ec90f38c202ca4829889b0391a87831,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_
RUNNING,CreatedAt:1720467833062418242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c651de930fe24e21f01e847e92ccd8,},Annotations:map[string]string{io.kubernetes.container.hash: a6167f3e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce2d445cc77e3f911746b0aeff0ba32edd8519f553d9c3d6c8ae4c8ed7f1f6e,PodSandboxId:099bea57c762e72638e07a928306ae6f887e216f6278a4bcf26e416e07a1a671,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720467796784398924,Lab
els:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rn86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f1d749e-ad28-4afb-b77a-0c65487d73ef,},Annotations:map[string]string{io.kubernetes.container.hash: fffecc82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d759a85f805c07da420d8af212a63eda5d55f2bb5bee54df3181714f9cbdbfc,PodSandboxId:a28870285aa86509944e44042c6aab1507f1d54d775dd80d3754950f849d6738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720467796339868082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8628r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65592c4e-f2fd-485c-9955-d5fc8919733c,},Annotations:map[string]string{io.kubernetes.container.hash: c33663fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7a560c4e5ec5b7c28ad79761fc33d764386cf750db9587b525b19d8d89660b,PodSandboxId:4bd0547ee8efe6ccd5acfd0bf383fd0e825a296482beec96832a47864169938b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720467796334573635,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fc86579-517b-4253-a3bb-7b8b65b1c67c,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5d2b8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35857c63fdc767af033560cff3bc9f314e9609ace1e13840898a4a31787f9edb,PodSandboxId:6fc3d06730efbd704f6d46c7d09df9859129f49c7071b10c5722d1b40958ca60,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720467791576264621,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c651de930fe24e21f01e847e92ccd8,},Annotations:map[string]string{io.kubernetes.container.hash: a6167f3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2f3102377551c44f1aa1e792e12a4ece62f9ccb4be567a820ca4b3da662b70,PodSandboxId:3a45a0c71bde6c5d352b9a54b6bde562d7baaacb667d75a5f6a0d8e138b2e15e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa13
9453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720467791551165435,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e77e0701059f90a37e5a5a97c73485,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0a126fc2a29c0b72755623bed4650a5371459f085284ee13f6bcd1a9fd9321,PodSandboxId:3463cdd9bf52c8941c20f0ea47cc6276a8d7f367117f3aa3795256e315ae9e4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f6
8704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720467791518504329,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d0813c778c3a8b4f65eb8ad25d7f62,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=447f717f-f035-4128-8998-2002996a2f6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:54:37 functional-787563 crio[4260]: time="2024-07-08 19:54:37.005751884Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57210825-ccf6-4902-8599-7c45c107ae22 name=/runtime.v1.RuntimeService/Version
	Jul 08 19:54:37 functional-787563 crio[4260]: time="2024-07-08 19:54:37.005825180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57210825-ccf6-4902-8599-7c45c107ae22 name=/runtime.v1.RuntimeService/Version
	Jul 08 19:54:37 functional-787563 crio[4260]: time="2024-07-08 19:54:37.007541982Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b2a3a4a-87bb-4a4c-bffd-d6b204755ab4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:54:37 functional-787563 crio[4260]: time="2024-07-08 19:54:37.009939691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720468477009907243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260166,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b2a3a4a-87bb-4a4c-bffd-d6b204755ab4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 19:54:37 functional-787563 crio[4260]: time="2024-07-08 19:54:37.010812440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8ab7933-e1af-41ab-8173-dbda5a67356b name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:54:37 functional-787563 crio[4260]: time="2024-07-08 19:54:37.010872019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8ab7933-e1af-41ab-8173-dbda5a67356b name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 19:54:37 functional-787563 crio[4260]: time="2024-07-08 19:54:37.011274472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf8421588b0117298fccc6f8245894cd27c5d70082a8990e29882376158ac85f,PodSandboxId:6906d482374e4e80bfab581bfcba445b99528067a6eaaefcb9061b242580c4b5,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c,State:CONTAINER_RUNNING,CreatedAt:1720467893578025671,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68b519c2-8ffe-49fe-b672-d7f3da891367,},Annotations:map[string]string{io.kubernetes.container.hash: da885f30,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d3e83d36c7c825aa1f71f6970c09f5d3ad6a4615746d6bb35ac0fea491df10f,PodSandboxId:b086314d0cce433f567731cf7bb8c223cdf376e487ee94654cbd1088f975a384,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1720467893413505452,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-5p9cj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4bf5e1fd-9d15-4b2f-860b-bb5e3151ddeb,},Annotations:map[string]string{io.kubernetes.container
.hash: f96ef43b,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34aad32c3a4b47350ce271bb67a4db791f93483fcbd357ac4bda5f023e47bbf,PodSandboxId:20e8dc6484984a5123a31ed57d5a591fa0fa19a9c0ea5c27302be875fb8cfbea,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1720467891738692554,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-vvh8x,io.kubernetes.pod.namespace:
kubernetes-dashboard,io.kubernetes.pod.uid: a64a5c6c-5567-43c4-be14-cd4c892cfab2,},Annotations:map[string]string{io.kubernetes.container.hash: ad58d09e,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9721bd240114db30c69bb03b78ae79aef767fc535fe74fe7d1cee32c7f77d1e,PodSandboxId:8e0370ecb1706b12d2590f94139fe3668a0d31112bc8b4e63b64db219e34baa3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1720467881333758088,Labels:map[string]string{io.kubernetes.contai
ner.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 635fb99f-c908-4897-9eca-db45b921fa95,},Annotations:map[string]string{io.kubernetes.container.hash: 50247321,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8286bb7be66385d1498cba96fe004499853b5d6642a23d374814c618875170c2,PodSandboxId:705911e0cab5c10771ee8f493dc058a68d1d8afb8b61083a2bca195bf2539e5e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1720467868128724969,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod
.name: hello-node-connect-57b4589c47-vp8fl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe69e989-9c23-422b-9945-e0ce4e0bc5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 99cb2414,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe6f3451c7b707ad7f0a67a8deda8895ddd2868f619654b6ac02b805d94746d,PodSandboxId:40fe09c6bb22a7e810f3e0149e158fa7733f7959e48a36f5c2958bdf7a70427d,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1720467867858216792,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-6d85cfcfd8-gzmzp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 586d22c6-94ee-4bce-bdaf-15cd4bd2a888,},Annotations:map[string]string{io.kubernetes.container.hash: 4ccb58ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8297c83c2f6827537575adb11f8b1ef407fd238d4b5fca15ed6f5db60e8339,PodSandboxId:3797051907dc0455796687621d5efa6a445b6adf899963a60a0829b9c80fdd6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720467836889311039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kub
e-proxy-8628r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65592c4e-f2fd-485c-9955-d5fc8919733c,},Annotations:map[string]string{io.kubernetes.container.hash: c33663fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:776271f951cf908757333ad39ce6b9d0d640513ee8b48a07d18e3bed1d6337f4,PodSandboxId:d6a605b819e7fcbbfeefbead60f4d584b6974bf347cac81098b903e150218714,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720467836903179353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisione
r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fc86579-517b-4253-a3bb-7b8b65b1c67c,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5d2b8c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f6d112092815eed9cc46792b17fa99e3806b009b0ad32c8ae5debc672ab5c4,PodSandboxId:6dc5669e1e7d7bf1a0d5ef1e44f977e1710adfab9049051ac569028c8b4964cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720467836885083505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rn86,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 6f1d749e-ad28-4afb-b77a-0c65487d73ef,},Annotations:map[string]string{io.kubernetes.container.hash: fffecc82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1384e09f76d0a283744d925119ceab38c9357ae829e892825ea0640bdfaab4f2,PodSandboxId:0cdbed0491f9c316251043d4198e4a4a998ae43f7cc7f24fa739786e7a4fd416,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6
bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720467833268056444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dc8bd34f975c456dc342de7f12106d4,},Annotations:map[string]string{io.kubernetes.container.hash: 2c182bef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb01c20c64b0ca4468a3d6f00ea95c228dc52e3ca3009e1cd9bdc462e4eb2c49,PodSandboxId:a8166f22451cc528cdb6e859427f24ea45dbbe2291c52e018299ed3ab09f5a7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:
CONTAINER_RUNNING,CreatedAt:1720467833090722462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e77e0701059f90a37e5a5a97c73485,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:656b89a78e1b1114dd7145bafc59d647e62d9aacfea6f45d58e95b4da27f215f,PodSandboxId:a2ec6b3d7d56783aa02356b9f900a4781a76bbb50b2c2b3c27e36ef59f14cf8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTA
INER_RUNNING,CreatedAt:1720467833071835259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d0813c778c3a8b4f65eb8ad25d7f62,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0db96b6546264ea14634c40a93c7236d9d2a0a05427de2a07547a79c57514a0,PodSandboxId:887a1237bcd00644e77d85c7ae4c514b9ec90f38c202ca4829889b0391a87831,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_
RUNNING,CreatedAt:1720467833062418242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c651de930fe24e21f01e847e92ccd8,},Annotations:map[string]string{io.kubernetes.container.hash: a6167f3e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce2d445cc77e3f911746b0aeff0ba32edd8519f553d9c3d6c8ae4c8ed7f1f6e,PodSandboxId:099bea57c762e72638e07a928306ae6f887e216f6278a4bcf26e416e07a1a671,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720467796784398924,Lab
els:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2rn86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f1d749e-ad28-4afb-b77a-0c65487d73ef,},Annotations:map[string]string{io.kubernetes.container.hash: fffecc82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d759a85f805c07da420d8af212a63eda5d55f2bb5bee54df3181714f9cbdbfc,PodSandboxId:a28870285aa86509944e44042c6aab1507f1d54d775dd80d3754950f849d6738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720467796339868082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8628r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65592c4e-f2fd-485c-9955-d5fc8919733c,},Annotations:map[string]string{io.kubernetes.container.hash: c33663fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7a560c4e5ec5b7c28ad79761fc33d764386cf750db9587b525b19d8d89660b,PodSandboxId:4bd0547ee8efe6ccd5acfd0bf383fd0e825a296482beec96832a47864169938b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720467796334573635,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fc86579-517b-4253-a3bb-7b8b65b1c67c,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5d2b8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35857c63fdc767af033560cff3bc9f314e9609ace1e13840898a4a31787f9edb,PodSandboxId:6fc3d06730efbd704f6d46c7d09df9859129f49c7071b10c5722d1b40958ca60,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720467791576264621,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c651de930fe24e21f01e847e92ccd8,},Annotations:map[string]string{io.kubernetes.container.hash: a6167f3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2f3102377551c44f1aa1e792e12a4ece62f9ccb4be567a820ca4b3da662b70,PodSandboxId:3a45a0c71bde6c5d352b9a54b6bde562d7baaacb667d75a5f6a0d8e138b2e15e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa13
9453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720467791551165435,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e77e0701059f90a37e5a5a97c73485,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0a126fc2a29c0b72755623bed4650a5371459f085284ee13f6bcd1a9fd9321,PodSandboxId:3463cdd9bf52c8941c20f0ea47cc6276a8d7f367117f3aa3795256e315ae9e4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f6
8704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720467791518504329,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-787563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d0813c778c3a8b4f65eb8ad25d7f62,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8ab7933-e1af-41ab-8173-dbda5a67356b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	cf8421588b011       docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df                  9 minutes ago       Running             myfrontend                  0                   6906d482374e4       sp-pod
	6d3e83d36c7c8       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   b086314d0cce4       dashboard-metrics-scraper-b5fc48f67-5p9cj
	d34aad32c3a4b       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         9 minutes ago       Running             kubernetes-dashboard        0                   20e8dc6484984       kubernetes-dashboard-779776cb65-vvh8x
	e9721bd240114       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              9 minutes ago       Exited              mount-munger                0                   8e0370ecb1706       busybox-mount
	8286bb7be6638       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 10 minutes ago      Running             echoserver                  0                   705911e0cab5c       hello-node-connect-57b4589c47-vp8fl
	1fe6f3451c7b7       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   40fe09c6bb22a       hello-node-6d85cfcfd8-gzmzp
	776271f951cf9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   d6a605b819e7f       storage-provisioner
	dc8297c83c2f6       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                 10 minutes ago      Running             kube-proxy                  2                   3797051907dc0       kube-proxy-8628r
	47f6d11209281       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 10 minutes ago      Running             coredns                     2                   6dc5669e1e7d7       coredns-7db6d8ff4d-2rn86
	1384e09f76d0a       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                 10 minutes ago      Running             kube-apiserver              0                   0cdbed0491f9c       kube-apiserver-functional-787563
	cb01c20c64b0c       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                 10 minutes ago      Running             kube-scheduler              2                   a8166f22451cc       kube-scheduler-functional-787563
	656b89a78e1b1       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                 10 minutes ago      Running             kube-controller-manager     2                   a2ec6b3d7d567       kube-controller-manager-functional-787563
	e0db96b654626       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 10 minutes ago      Running             etcd                        2                   887a1237bcd00       etcd-functional-787563
	dce2d445cc77e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 11 minutes ago      Exited              coredns                     1                   099bea57c762e       coredns-7db6d8ff4d-2rn86
	3d759a85f805c       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                 11 minutes ago      Exited              kube-proxy                  1                   a28870285aa86       kube-proxy-8628r
	4c7a560c4e5ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         2                   4bd0547ee8efe       storage-provisioner
	35857c63fdc76       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 11 minutes ago      Exited              etcd                        1                   6fc3d06730efb       etcd-functional-787563
	7e2f310237755       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                 11 minutes ago      Exited              kube-scheduler              1                   3a45a0c71bde6       kube-scheduler-functional-787563
	4c0a126fc2a29       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                 11 minutes ago      Exited              kube-controller-manager     1                   3463cdd9bf52c       kube-controller-manager-functional-787563
	
	
	==> coredns [47f6d112092815eed9cc46792b17fa99e3806b009b0ad32c8ae5debc672ab5c4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41730 - 56414 "HINFO IN 5973878966027547211.5681778345119228743. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020102656s
	
	
	==> coredns [dce2d445cc77e3f911746b0aeff0ba32edd8519f553d9c3d6c8ae4c8ed7f1f6e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52946 - 31283 "HINFO IN 1108960466725565054.3133524542358215125. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013095795s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-787563
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-787563
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=functional-787563
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T19_41_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:41:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-787563
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 19:54:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:50:33 +0000   Mon, 08 Jul 2024 19:41:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:50:33 +0000   Mon, 08 Jul 2024 19:41:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:50:33 +0000   Mon, 08 Jul 2024 19:41:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:50:33 +0000   Mon, 08 Jul 2024 19:41:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    functional-787563
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 abad53aeff314974ad0c48016a0097f2
	  System UUID:                abad53ae-ff31-4974-ad0c-48016a0097f2
	  Boot ID:                    7e7eca16-d2e1-4c76-a6b9-5a5d4157f001
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6d85cfcfd8-gzmzp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-57b4589c47-vp8fl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-64454c8b5c-m6f4d                       600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 coredns-7db6d8ff4d-2rn86                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-787563                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-787563             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-787563    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8628r                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-787563             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-5p9cj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-vvh8x        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet          Node functional-787563 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node functional-787563 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)  kubelet          Node functional-787563 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                12m                kubelet          Node functional-787563 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-787563 event: Registered Node functional-787563 in Controller
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-787563 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-787563 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-787563 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-787563 event: Registered Node functional-787563 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-787563 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-787563 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-787563 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-787563 event: Registered Node functional-787563 in Controller
	
	
	==> dmesg <==
	[  +0.135343] systemd-fstab-generator[2349]: Ignoring "noauto" option for root device
	[  +0.279069] systemd-fstab-generator[2377]: Ignoring "noauto" option for root device
	[  +6.423709] systemd-fstab-generator[2503]: Ignoring "noauto" option for root device
	[  +0.076723] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.807643] systemd-fstab-generator[2627]: Ignoring "noauto" option for root device
	[  +5.607519] kauditd_printk_skb: 75 callbacks suppressed
	[ +11.841723] kauditd_printk_skb: 35 callbacks suppressed
	[  +3.160000] systemd-fstab-generator[3375]: Ignoring "noauto" option for root device
	[ +16.670182] systemd-fstab-generator[4180]: Ignoring "noauto" option for root device
	[  +0.074433] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.059604] systemd-fstab-generator[4192]: Ignoring "noauto" option for root device
	[  +0.171680] systemd-fstab-generator[4206]: Ignoring "noauto" option for root device
	[  +0.139854] systemd-fstab-generator[4218]: Ignoring "noauto" option for root device
	[  +0.288079] systemd-fstab-generator[4246]: Ignoring "noauto" option for root device
	[  +1.331027] systemd-fstab-generator[4723]: Ignoring "noauto" option for root device
	[  +2.428922] systemd-fstab-generator[4847]: Ignoring "noauto" option for root device
	[  +0.789191] kauditd_printk_skb: 206 callbacks suppressed
	[Jul 8 19:44] kauditd_printk_skb: 35 callbacks suppressed
	[  +3.700244] systemd-fstab-generator[5387]: Ignoring "noauto" option for root device
	[  +6.242134] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.464974] kauditd_printk_skb: 33 callbacks suppressed
	[  +7.604931] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.386512] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.455255] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.489313] kauditd_printk_skb: 48 callbacks suppressed
	
	
	==> etcd [35857c63fdc767af033560cff3bc9f314e9609ace1e13840898a4a31787f9edb] <==
	{"level":"info","ts":"2024-07-08T19:43:12.173137Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:13.39364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:13.393794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:13.393861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 received MsgPreVoteResp from 731f5c40d4af6217 at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:13.393897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 became candidate at term 3"}
	{"level":"info","ts":"2024-07-08T19:43:13.39392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 received MsgVoteResp from 731f5c40d4af6217 at term 3"}
	{"level":"info","ts":"2024-07-08T19:43:13.393947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 became leader at term 3"}
	{"level":"info","ts":"2024-07-08T19:43:13.393972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 731f5c40d4af6217 elected leader 731f5c40d4af6217 at term 3"}
	{"level":"info","ts":"2024-07-08T19:43:13.39922Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:43:13.401092Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.54:2379"}
	{"level":"info","ts":"2024-07-08T19:43:13.401437Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:43:13.399168Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"731f5c40d4af6217","local-member-attributes":"{Name:functional-787563 ClientURLs:[https://192.168.39.54:2379]}","request-path":"/0/members/731f5c40d4af6217/attributes","cluster-id":"ad335f297da439ca","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T19:43:13.401696Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T19:43:13.401731Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T19:43:13.403122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T19:43:41.671587Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-08T19:43:41.671632Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-787563","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"]}
	{"level":"warn","ts":"2024-07-08T19:43:41.671734Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T19:43:41.671832Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T19:43:41.767754Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T19:43:41.767808Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-08T19:43:41.767869Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"731f5c40d4af6217","current-leader-member-id":"731f5c40d4af6217"}
	{"level":"info","ts":"2024-07-08T19:43:41.771269Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-07-08T19:43:41.771613Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-07-08T19:43:41.771638Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-787563","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"]}
	
	
	==> etcd [e0db96b6546264ea14634c40a93c7236d9d2a0a05427de2a07547a79c57514a0] <==
	{"level":"info","ts":"2024-07-08T19:43:54.793623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 731f5c40d4af6217 elected leader 731f5c40d4af6217 at term 4"}
	{"level":"info","ts":"2024-07-08T19:43:54.802679Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"731f5c40d4af6217","local-member-attributes":"{Name:functional-787563 ClientURLs:[https://192.168.39.54:2379]}","request-path":"/0/members/731f5c40d4af6217/attributes","cluster-id":"ad335f297da439ca","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T19:43:54.802796Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:43:54.803247Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:43:54.805132Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T19:43:54.80537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.54:2379"}
	{"level":"info","ts":"2024-07-08T19:43:54.805423Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T19:43:54.805452Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T19:44:31.629214Z","caller":"traceutil/trace.go:171","msg":"trace[1919870287] transaction","detail":"{read_only:false; response_revision:705; number_of_response:1; }","duration":"224.9805ms","start":"2024-07-08T19:44:31.404194Z","end":"2024-07-08T19:44:31.629174Z","steps":["trace[1919870287] 'process raft request'  (duration: 224.820653ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:44:37.361701Z","caller":"traceutil/trace.go:171","msg":"trace[1854933521] transaction","detail":"{read_only:false; response_revision:739; number_of_response:1; }","duration":"313.922431ms","start":"2024-07-08T19:44:37.047755Z","end":"2024-07-08T19:44:37.361678Z","steps":["trace[1854933521] 'process raft request'  (duration: 313.769758ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:44:37.36269Z","caller":"traceutil/trace.go:171","msg":"trace[1315802337] linearizableReadLoop","detail":"{readStateIndex:806; appliedIndex:806; }","duration":"105.779632ms","start":"2024-07-08T19:44:37.256858Z","end":"2024-07-08T19:44:37.362638Z","steps":["trace[1315802337] 'read index received'  (duration: 105.773873ms)","trace[1315802337] 'applied index is now lower than readState.Index'  (duration: 4.867µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-08T19:44:37.362835Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.958959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-08T19:44:37.362949Z","caller":"traceutil/trace.go:171","msg":"trace[236644429] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:739; }","duration":"106.082168ms","start":"2024-07-08T19:44:37.256854Z","end":"2024-07-08T19:44:37.362936Z","steps":["trace[236644429] 'agreement among raft nodes before linearized reading'  (duration: 105.904894ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:44:37.363102Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-08T19:44:37.047735Z","time spent":"314.027787ms","remote":"127.0.0.1:35340","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":681,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-oin4oh2zcz2sxkcuqhcfbk2lou\" mod_revision:684 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-oin4oh2zcz2sxkcuqhcfbk2lou\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-oin4oh2zcz2sxkcuqhcfbk2lou\" > >"}
	{"level":"info","ts":"2024-07-08T19:44:45.597709Z","caller":"traceutil/trace.go:171","msg":"trace[549689075] transaction","detail":"{read_only:false; response_revision:761; number_of_response:1; }","duration":"214.874769ms","start":"2024-07-08T19:44:45.382819Z","end":"2024-07-08T19:44:45.597694Z","steps":["trace[549689075] 'process raft request'  (duration: 214.768934ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:44:51.308085Z","caller":"traceutil/trace.go:171","msg":"trace[1623834493] transaction","detail":"{read_only:false; response_revision:838; number_of_response:1; }","duration":"487.932946ms","start":"2024-07-08T19:44:50.820138Z","end":"2024-07-08T19:44:51.308071Z","steps":["trace[1623834493] 'process raft request'  (duration: 487.797496ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T19:44:51.308215Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-08T19:44:50.820123Z","time spent":"488.026075ms","remote":"127.0.0.1:35252","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:837 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-08T19:44:57.621837Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-08T19:44:57.311058Z","time spent":"310.775554ms","remote":"127.0.0.1:35100","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-07-08T19:44:57.622057Z","caller":"traceutil/trace.go:171","msg":"trace[1111751834] linearizableReadLoop","detail":"{readStateIndex:934; appliedIndex:934; }","duration":"284.204487ms","start":"2024-07-08T19:44:57.337831Z","end":"2024-07-08T19:44:57.622036Z","steps":["trace[1111751834] 'read index received'  (duration: 284.198169ms)","trace[1111751834] 'applied index is now lower than readState.Index'  (duration: 4.875µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-08T19:44:57.622184Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.337953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-07-08T19:44:57.622307Z","caller":"traceutil/trace.go:171","msg":"trace[1273244464] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:862; }","duration":"284.489338ms","start":"2024-07-08T19:44:57.337806Z","end":"2024-07-08T19:44:57.622295Z","steps":["trace[1273244464] 'agreement among raft nodes before linearized reading'  (duration: 284.330468ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:45:01.803032Z","caller":"traceutil/trace.go:171","msg":"trace[1236560854] transaction","detail":"{read_only:false; response_revision:869; number_of_response:1; }","duration":"138.899692ms","start":"2024-07-08T19:45:01.664096Z","end":"2024-07-08T19:45:01.802996Z","steps":["trace[1236560854] 'process raft request'  (duration: 138.316629ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T19:53:54.841312Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1055}
	{"level":"info","ts":"2024-07-08T19:53:54.869565Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1055,"took":"27.334108ms","hash":3188716165,"current-db-size-bytes":3784704,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1417216,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2024-07-08T19:53:54.869692Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3188716165,"revision":1055,"compact-revision":-1}
	
	
	==> kernel <==
	 19:54:37 up 13 min,  0 users,  load average: 0.08, 0.19, 0.18
	Linux functional-787563 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1384e09f76d0a283744d925119ceab38c9357ae829e892825ea0640bdfaab4f2] <==
	E0708 19:43:56.101414       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0708 19:43:56.117826       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0708 19:43:56.117986       1 aggregator.go:165] initial CRD sync complete...
	I0708 19:43:56.118018       1 autoregister_controller.go:141] Starting autoregister controller
	I0708 19:43:56.118024       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0708 19:43:56.118029       1 cache.go:39] Caches are synced for autoregister controller
	I0708 19:43:56.154215       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0708 19:43:56.985903       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0708 19:43:57.872958       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 19:43:57.892206       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 19:43:57.930578       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 19:43:57.964981       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 19:43:57.971394       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0708 19:44:09.428639       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 19:44:09.529044       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 19:44:19.374086       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.251.37"}
	I0708 19:44:23.701520       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0708 19:44:23.823761       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.107.237"}
	I0708 19:44:25.874561       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.126.68"}
	I0708 19:44:35.726521       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.110.63.192"}
	I0708 19:44:45.380443       1 controller.go:615] quota admission added evaluator for: namespaces
	I0708 19:44:45.941216       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.146.33"}
	I0708 19:44:45.999001       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.161.30"}
	E0708 19:44:46.008267       1 conn.go:339] Error on socket receive: read tcp 192.168.39.54:8441->192.168.39.1:33952: use of closed network connection
	E0708 19:44:59.702023       1 conn.go:339] Error on socket receive: read tcp 192.168.39.54:8441->192.168.39.1:35550: use of closed network connection
	
	
	==> kube-controller-manager [4c0a126fc2a29c0b72755623bed4650a5371459f085284ee13f6bcd1a9fd9321] <==
	I0708 19:43:27.877449       1 shared_informer.go:320] Caches are synced for PVC protection
	I0708 19:43:27.885726       1 shared_informer.go:320] Caches are synced for crt configmap
	I0708 19:43:27.889126       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0708 19:43:27.891437       1 shared_informer.go:320] Caches are synced for service account
	I0708 19:43:27.893258       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0708 19:43:27.893374       1 shared_informer.go:320] Caches are synced for endpoint
	I0708 19:43:27.895591       1 shared_informer.go:320] Caches are synced for expand
	I0708 19:43:27.898527       1 shared_informer.go:320] Caches are synced for job
	I0708 19:43:27.905284       1 shared_informer.go:320] Caches are synced for deployment
	I0708 19:43:27.907798       1 shared_informer.go:320] Caches are synced for HPA
	I0708 19:43:27.907846       1 shared_informer.go:320] Caches are synced for GC
	I0708 19:43:27.907893       1 shared_informer.go:320] Caches are synced for disruption
	I0708 19:43:27.909108       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0708 19:43:27.909303       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.804µs"
	I0708 19:43:27.915503       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0708 19:43:27.915573       1 shared_informer.go:320] Caches are synced for namespace
	I0708 19:43:27.959035       1 shared_informer.go:320] Caches are synced for daemon sets
	I0708 19:43:28.002893       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0708 19:43:28.021304       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0708 19:43:28.026496       1 shared_informer.go:320] Caches are synced for stateful set
	I0708 19:43:28.095794       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:43:28.107130       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:43:28.525715       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:43:28.525783       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:43:28.525792       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [656b89a78e1b1114dd7145bafc59d647e62d9aacfea6f45d58e95b4da27f215f] <==
	E0708 19:44:45.727775       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:44:45.749484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="48.067469ms"
	E0708 19:44:45.749634       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:44:45.749972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="22.129824ms"
	E0708 19:44:45.750018       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:44:45.755395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="4.665669ms"
	E0708 19:44:45.755546       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:44:45.755497       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="4.691213ms"
	E0708 19:44:45.755717       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:44:45.761018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="5.432579ms"
	E0708 19:44:45.761912       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:44:45.769275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="8.452057ms"
	E0708 19:44:45.769414       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:44:45.817397       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="49.846528ms"
	I0708 19:44:45.864649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="54.455535ms"
	I0708 19:44:45.875028       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="57.479398ms"
	I0708 19:44:45.883219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="17.348931ms"
	I0708 19:44:45.912397       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="37.25311ms"
	I0708 19:44:45.912607       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="114.711µs"
	I0708 19:44:45.916581       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="33.236779ms"
	I0708 19:44:45.916804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="96.558µs"
	I0708 19:44:52.474990       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="20.198821ms"
	I0708 19:44:52.475073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="40.376µs"
	I0708 19:44:54.484179       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="13.785291ms"
	I0708 19:44:54.484256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="36.542µs"
	
	
	==> kube-proxy [3d759a85f805c07da420d8af212a63eda5d55f2bb5bee54df3181714f9cbdbfc] <==
	I0708 19:43:16.632429       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:43:16.666911       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	I0708 19:43:16.760510       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:43:16.760542       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:43:16.760557       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:43:16.765719       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:43:16.765927       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:43:16.765946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:43:16.768262       1 config.go:192] "Starting service config controller"
	I0708 19:43:16.768296       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:43:16.768315       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:43:16.768369       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:43:16.768749       1 config.go:319] "Starting node config controller"
	I0708 19:43:16.768777       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:43:16.870221       1 shared_informer.go:320] Caches are synced for node config
	I0708 19:43:16.870238       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 19:43:16.870222       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [dc8297c83c2f6827537575adb11f8b1ef407fd238d4b5fca15ed6f5db60e8339] <==
	I0708 19:43:57.136520       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:43:57.147634       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	I0708 19:43:57.204241       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:43:57.204295       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:43:57.204311       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:43:57.207123       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:43:57.207367       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:43:57.207637       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:43:57.208916       1 config.go:192] "Starting service config controller"
	I0708 19:43:57.208951       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:43:57.209034       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:43:57.209054       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:43:57.209618       1 config.go:319] "Starting node config controller"
	I0708 19:43:57.209644       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:43:57.309593       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 19:43:57.309657       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:43:57.309747       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7e2f3102377551c44f1aa1e792e12a4ece62f9ccb4be567a820ca4b3da662b70] <==
	I0708 19:43:12.537872       1 serving.go:380] Generated self-signed cert in-memory
	W0708 19:43:14.695180       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 19:43:14.695262       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 19:43:14.695277       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 19:43:14.695283       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 19:43:14.767519       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0708 19:43:14.767685       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:43:14.769286       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0708 19:43:14.769530       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0708 19:43:14.769554       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0708 19:43:14.774884       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 19:43:14.876431       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0708 19:43:41.670847       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cb01c20c64b0ca4468a3d6f00ea95c228dc52e3ca3009e1cd9bdc462e4eb2c49] <==
	I0708 19:43:54.259022       1 serving.go:380] Generated self-signed cert in-memory
	W0708 19:43:56.056428       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 19:43:56.056571       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 19:43:56.056601       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 19:43:56.056679       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 19:43:56.092234       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0708 19:43:56.093016       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:43:56.102609       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0708 19:43:56.102652       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 19:43:56.106788       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0708 19:43:56.106892       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0708 19:43:56.203220       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 08 19:49:52 functional-787563 kubelet[4854]: E0708 19:49:52.667415    4854 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:49:52 functional-787563 kubelet[4854]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:49:52 functional-787563 kubelet[4854]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:49:52 functional-787563 kubelet[4854]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:49:52 functional-787563 kubelet[4854]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 19:50:52 functional-787563 kubelet[4854]: E0708 19:50:52.667766    4854 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:50:52 functional-787563 kubelet[4854]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:50:52 functional-787563 kubelet[4854]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:50:52 functional-787563 kubelet[4854]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:50:52 functional-787563 kubelet[4854]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 19:51:52 functional-787563 kubelet[4854]: E0708 19:51:52.666502    4854 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:51:52 functional-787563 kubelet[4854]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:51:52 functional-787563 kubelet[4854]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:51:52 functional-787563 kubelet[4854]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:51:52 functional-787563 kubelet[4854]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 19:52:52 functional-787563 kubelet[4854]: E0708 19:52:52.668544    4854 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:52:52 functional-787563 kubelet[4854]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:52:52 functional-787563 kubelet[4854]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:52:52 functional-787563 kubelet[4854]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:52:52 functional-787563 kubelet[4854]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 19:53:52 functional-787563 kubelet[4854]: E0708 19:53:52.666583    4854 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:53:52 functional-787563 kubelet[4854]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:53:52 functional-787563 kubelet[4854]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:53:52 functional-787563 kubelet[4854]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:53:52 functional-787563 kubelet[4854]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> kubernetes-dashboard [d34aad32c3a4b47350ce271bb67a4db791f93483fcbd357ac4bda5f023e47bbf] <==
	2024/07/08 19:44:51 Using namespace: kubernetes-dashboard
	2024/07/08 19:44:51 Using in-cluster config to connect to apiserver
	2024/07/08 19:44:51 Using secret token for csrf signing
	2024/07/08 19:44:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/08 19:44:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/08 19:44:51 Successful initial request to the apiserver, version: v1.30.2
	2024/07/08 19:44:51 Generating JWE encryption key
	2024/07/08 19:44:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/08 19:44:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/08 19:44:52 Initializing JWE encryption key from synchronized object
	2024/07/08 19:44:52 Creating in-cluster Sidecar client
	2024/07/08 19:44:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/08 19:44:52 Serving insecurely on HTTP port: 9090
	2024/07/08 19:45:22 Successful request to sidecar
	2024/07/08 19:44:51 Starting overwatch
	
	
	==> storage-provisioner [4c7a560c4e5ec5b7c28ad79761fc33d764386cf750db9587b525b19d8d89660b] <==
	I0708 19:43:16.525153       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 19:43:16.556847       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 19:43:16.556905       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 19:43:33.965064       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 19:43:33.965187       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-787563_119cb85b-bc2d-4b40-9488-e31111e293af!
	I0708 19:43:33.965780       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"66b58477-8bca-42ad-995c-fd94c4985ef4", APIVersion:"v1", ResourceVersion:"513", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-787563_119cb85b-bc2d-4b40-9488-e31111e293af became leader
	I0708 19:43:34.066405       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-787563_119cb85b-bc2d-4b40-9488-e31111e293af!
	
	
	==> storage-provisioner [776271f951cf908757333ad39ce6b9d0d640513ee8b48a07d18e3bed1d6337f4] <==
	I0708 19:43:57.045028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 19:43:57.100516       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 19:43:57.101554       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 19:44:14.502295       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 19:44:14.502680       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-787563_7983d39e-9935-4184-aee7-154a1cfc73f0!
	I0708 19:44:14.502760       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"66b58477-8bca-42ad-995c-fd94c4985ef4", APIVersion:"v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-787563_7983d39e-9935-4184-aee7-154a1cfc73f0 became leader
	I0708 19:44:14.603564       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-787563_7983d39e-9935-4184-aee7-154a1cfc73f0!
	I0708 19:44:31.654516       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0708 19:44:31.656451       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    d1b60864-c860-4eb2-bf2b-da6acff107e5 361 0 2024-07-08 19:42:12 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-08 19:42:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-d8b8101a-f5b2-4dfc-b31d-0bfc7595a1b4 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  d8b8101a-f5b2-4dfc-b31d-0bfc7595a1b4 706 0 2024-07-08 19:44:31 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-08 19:44:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-08 19:44:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0708 19:44:31.658218       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d8b8101a-f5b2-4dfc-b31d-0bfc7595a1b4", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0708 19:44:31.658427       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-d8b8101a-f5b2-4dfc-b31d-0bfc7595a1b4" provisioned
	I0708 19:44:31.658475       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0708 19:44:31.658510       1 volume_store.go:212] Trying to save persistentvolume "pvc-d8b8101a-f5b2-4dfc-b31d-0bfc7595a1b4"
	I0708 19:44:31.673087       1 volume_store.go:219] persistentvolume "pvc-d8b8101a-f5b2-4dfc-b31d-0bfc7595a1b4" saved
	I0708 19:44:31.675809       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d8b8101a-f5b2-4dfc-b31d-0bfc7595a1b4", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-d8b8101a-f5b2-4dfc-b31d-0bfc7595a1b4
	

                                                
                                                
-- /stdout --
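Note on the kubelet entries in the log dump above: the repeated "Could not set up iptables canary" errors come from ip6tables being unable to open the nat table, which usually means the ip6table_nat kernel module is not loaded in the minikube guest; this canary warning is typically benign and is unrelated to the MySQL failure being diagnosed here. A possible manual check, shown only as an illustration (these commands are not part of the recorded run):

	out/minikube-linux-amd64 -p functional-787563 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"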
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-787563 -n functional-787563
helpers_test.go:261: (dbg) Run:  kubectl --context functional-787563 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-64454c8b5c-m6f4d
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-787563 describe pod busybox-mount mysql-64454c8b5c-m6f4d
helpers_test.go:282: (dbg) kubectl --context functional-787563 describe pod busybox-mount mysql-64454c8b5c-m6f4d:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-787563/192.168.39.54
	Start Time:       Mon, 08 Jul 2024 19:44:39 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e9721bd240114db30c69bb03b78ae79aef767fc535fe74fe7d1cee32c7f77d1e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Jul 2024 19:44:41 +0000
	      Finished:     Mon, 08 Jul 2024 19:44:41 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-blkpx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-blkpx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m58s  default-scheduler  Successfully assigned default/busybox-mount to functional-787563
	  Normal  Pulling    9m58s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m57s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.007s (1.007s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m57s  kubelet            Created container mount-munger
	  Normal  Started    9m57s  kubelet            Started container mount-munger
	
	
	Name:             mysql-64454c8b5c-m6f4d
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-787563/192.168.39.54
	Start Time:       Mon, 08 Jul 2024 19:44:35 +0000
	Labels:           app=mysql
	                  pod-template-hash=64454c8b5c
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-64454c8b5c
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r64md (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r64md:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/mysql-64454c8b5c-m6f4d to functional-787563

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.79s)
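Note: in the describe output above, mysql-64454c8b5c-m6f4d is still Pending with its mysql container stuck in ContainerCreating roughly ten minutes after being scheduled, and the only recorded event is Scheduled, which usually points at the docker.io/mysql:5.7 image pull not completing within the test's timeout. A possible manual follow-up, shown only as an illustration (not part of the recorded run):

	kubectl --context functional-787563 get events -n default --sort-by=.lastTimestamp
	out/minikube-linux-amd64 -p functional-787563 ssh "sudo crictl pull docker.io/mysql:5.7 && sudo crictl images | grep mysql"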

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 node stop m02 -v=7 --alsologtostderr
E0708 19:59:23.843508   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 19:59:23.848833   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 19:59:23.859191   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 19:59:23.879379   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 19:59:23.919710   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 19:59:24.000051   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 19:59:24.160510   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 19:59:24.481249   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 19:59:25.122210   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 19:59:26.403374   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 19:59:28.964534   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 19:59:34.085416   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 19:59:44.326529   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 20:00:04.807519   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 20:00:45.768691   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-511021 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.473974876s)

                                                
                                                
-- stdout --
	* Stopping node "ha-511021-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 19:59:02.954527   29651 out.go:291] Setting OutFile to fd 1 ...
	I0708 19:59:02.954688   29651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:59:02.954697   29651 out.go:304] Setting ErrFile to fd 2...
	I0708 19:59:02.954701   29651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:59:02.954895   29651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 19:59:02.955183   29651 mustload.go:65] Loading cluster: ha-511021
	I0708 19:59:02.955603   29651 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:59:02.955621   29651 stop.go:39] StopHost: ha-511021-m02
	I0708 19:59:02.955971   29651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:59:02.956015   29651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:59:02.971346   29651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
	I0708 19:59:02.971811   29651 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:59:02.972377   29651 main.go:141] libmachine: Using API Version  1
	I0708 19:59:02.972400   29651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:59:02.972702   29651 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:59:02.975111   29651 out.go:177] * Stopping node "ha-511021-m02"  ...
	I0708 19:59:02.976534   29651 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0708 19:59:02.976571   29651 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:59:02.976819   29651 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0708 19:59:02.976855   29651 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:59:02.979667   29651 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:59:02.980147   29651 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:59:02.980175   29651 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:59:02.980351   29651 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:59:02.980532   29651 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:59:02.980734   29651 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:59:02.980887   29651 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	I0708 19:59:03.068814   29651 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0708 19:59:03.122684   29651 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0708 19:59:03.177371   29651 main.go:141] libmachine: Stopping "ha-511021-m02"...
	I0708 19:59:03.177421   29651 main.go:141] libmachine: (ha-511021-m02) Calling .GetState
	I0708 19:59:03.178880   29651 main.go:141] libmachine: (ha-511021-m02) Calling .Stop
	I0708 19:59:03.181981   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 0/120
	I0708 19:59:04.184064   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 1/120
	I0708 19:59:05.186143   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 2/120
	I0708 19:59:06.187466   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 3/120
	I0708 19:59:07.188799   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 4/120
	I0708 19:59:08.190645   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 5/120
	I0708 19:59:09.191998   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 6/120
	I0708 19:59:10.193400   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 7/120
	I0708 19:59:11.194950   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 8/120
	I0708 19:59:12.196157   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 9/120
	I0708 19:59:13.198367   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 10/120
	I0708 19:59:14.199920   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 11/120
	I0708 19:59:15.202280   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 12/120
	I0708 19:59:16.204363   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 13/120
	I0708 19:59:17.206531   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 14/120
	I0708 19:59:18.208569   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 15/120
	I0708 19:59:19.209991   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 16/120
	I0708 19:59:20.211684   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 17/120
	I0708 19:59:21.212947   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 18/120
	I0708 19:59:22.214510   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 19/120
	I0708 19:59:23.216621   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 20/120
	I0708 19:59:24.217871   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 21/120
	I0708 19:59:25.219149   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 22/120
	I0708 19:59:26.220865   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 23/120
	I0708 19:59:27.222542   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 24/120
	I0708 19:59:28.224556   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 25/120
	I0708 19:59:29.225831   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 26/120
	I0708 19:59:30.227908   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 27/120
	I0708 19:59:31.230144   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 28/120
	I0708 19:59:32.231693   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 29/120
	I0708 19:59:33.234141   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 30/120
	I0708 19:59:34.235648   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 31/120
	I0708 19:59:35.237848   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 32/120
	I0708 19:59:36.239211   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 33/120
	I0708 19:59:37.240605   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 34/120
	I0708 19:59:38.242546   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 35/120
	I0708 19:59:39.244198   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 36/120
	I0708 19:59:40.245828   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 37/120
	I0708 19:59:41.247208   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 38/120
	I0708 19:59:42.248499   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 39/120
	I0708 19:59:43.250626   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 40/120
	I0708 19:59:44.252197   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 41/120
	I0708 19:59:45.253897   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 42/120
	I0708 19:59:46.255652   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 43/120
	I0708 19:59:47.257036   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 44/120
	I0708 19:59:48.259384   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 45/120
	I0708 19:59:49.260814   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 46/120
	I0708 19:59:50.262534   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 47/120
	I0708 19:59:51.263860   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 48/120
	I0708 19:59:52.265987   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 49/120
	I0708 19:59:53.268131   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 50/120
	I0708 19:59:54.270171   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 51/120
	I0708 19:59:55.271383   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 52/120
	I0708 19:59:56.272687   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 53/120
	I0708 19:59:57.274087   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 54/120
	I0708 19:59:58.275978   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 55/120
	I0708 19:59:59.278003   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 56/120
	I0708 20:00:00.279637   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 57/120
	I0708 20:00:01.282012   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 58/120
	I0708 20:00:02.284539   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 59/120
	I0708 20:00:03.285995   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 60/120
	I0708 20:00:04.287694   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 61/120
	I0708 20:00:05.289833   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 62/120
	I0708 20:00:06.291674   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 63/120
	I0708 20:00:07.293910   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 64/120
	I0708 20:00:08.295526   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 65/120
	I0708 20:00:09.297056   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 66/120
	I0708 20:00:10.298235   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 67/120
	I0708 20:00:11.299964   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 68/120
	I0708 20:00:12.301895   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 69/120
	I0708 20:00:13.304160   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 70/120
	I0708 20:00:14.305714   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 71/120
	I0708 20:00:15.307717   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 72/120
	I0708 20:00:16.310001   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 73/120
	I0708 20:00:17.311361   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 74/120
	I0708 20:00:18.313935   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 75/120
	I0708 20:00:19.315346   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 76/120
	I0708 20:00:20.316875   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 77/120
	I0708 20:00:21.318315   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 78/120
	I0708 20:00:22.320058   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 79/120
	I0708 20:00:23.322114   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 80/120
	I0708 20:00:24.323368   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 81/120
	I0708 20:00:25.325272   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 82/120
	I0708 20:00:26.326566   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 83/120
	I0708 20:00:27.328517   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 84/120
	I0708 20:00:28.329981   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 85/120
	I0708 20:00:29.331591   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 86/120
	I0708 20:00:30.332719   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 87/120
	I0708 20:00:31.334248   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 88/120
	I0708 20:00:32.336199   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 89/120
	I0708 20:00:33.338446   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 90/120
	I0708 20:00:34.339884   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 91/120
	I0708 20:00:35.341255   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 92/120
	I0708 20:00:36.342657   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 93/120
	I0708 20:00:37.343993   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 94/120
	I0708 20:00:38.345998   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 95/120
	I0708 20:00:39.347573   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 96/120
	I0708 20:00:40.348737   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 97/120
	I0708 20:00:41.350697   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 98/120
	I0708 20:00:42.352121   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 99/120
	I0708 20:00:43.354435   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 100/120
	I0708 20:00:44.356027   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 101/120
	I0708 20:00:45.358115   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 102/120
	I0708 20:00:46.359837   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 103/120
	I0708 20:00:47.361930   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 104/120
	I0708 20:00:48.363079   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 105/120
	I0708 20:00:49.364583   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 106/120
	I0708 20:00:50.366035   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 107/120
	I0708 20:00:51.367381   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 108/120
	I0708 20:00:52.368673   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 109/120
	I0708 20:00:53.370403   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 110/120
	I0708 20:00:54.371700   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 111/120
	I0708 20:00:55.372880   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 112/120
	I0708 20:00:56.374283   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 113/120
	I0708 20:00:57.375672   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 114/120
	I0708 20:00:58.377515   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 115/120
	I0708 20:00:59.379120   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 116/120
	I0708 20:01:00.380484   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 117/120
	I0708 20:01:01.382419   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 118/120
	I0708 20:01:02.383846   29651 main.go:141] libmachine: (ha-511021-m02) Waiting for machine to stop 119/120
	I0708 20:01:03.385155   29651 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0708 20:01:03.385281   29651 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-511021 node stop m02 -v=7 --alsologtostderr": exit status 30
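Note: the stop timed out because the guest never powered off during the driver's 120 one-second polls ("Waiting for machine to stop 0/120" through "119/120" above); after issuing the stop request the driver gives up with "unable to stop vm, current state \"Running\"". If this had to be cleared manually on the KVM host, one illustrative option (assuming the libvirt domain is named ha-511021-m02 after the node; not part of the recorded run) would be:

	virsh list --all
	virsh destroy ha-511021-m02   # hard power-off of the stuck domain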
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr: exit status 3 (19.172716313s)

                                                
                                                
-- stdout --
	ha-511021
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-511021-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:01:03.428395   30094 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:01:03.428644   30094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:01:03.428653   30094 out.go:304] Setting ErrFile to fd 2...
	I0708 20:01:03.428657   30094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:01:03.428869   30094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:01:03.429049   30094 out.go:298] Setting JSON to false
	I0708 20:01:03.429074   30094 mustload.go:65] Loading cluster: ha-511021
	I0708 20:01:03.429210   30094 notify.go:220] Checking for updates...
	I0708 20:01:03.429436   30094 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:01:03.429450   30094 status.go:255] checking status of ha-511021 ...
	I0708 20:01:03.429816   30094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:03.429866   30094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:03.449356   30094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I0708 20:01:03.449842   30094 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:03.450388   30094 main.go:141] libmachine: Using API Version  1
	I0708 20:01:03.450408   30094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:03.450783   30094 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:03.451007   30094 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 20:01:03.452863   30094 status.go:330] ha-511021 host status = "Running" (err=<nil>)
	I0708 20:01:03.452882   30094 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:01:03.453198   30094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:03.453239   30094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:03.469093   30094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
	I0708 20:01:03.469441   30094 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:03.469865   30094 main.go:141] libmachine: Using API Version  1
	I0708 20:01:03.469891   30094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:03.470219   30094 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:03.470451   30094 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:01:03.473165   30094 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:03.473493   30094 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:01:03.473517   30094 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:03.473683   30094 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:01:03.474028   30094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:03.474083   30094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:03.489019   30094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43377
	I0708 20:01:03.489444   30094 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:03.489901   30094 main.go:141] libmachine: Using API Version  1
	I0708 20:01:03.489925   30094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:03.490261   30094 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:03.490446   30094 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:01:03.490606   30094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:03.490641   30094 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:01:03.493409   30094 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:03.493862   30094 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:01:03.493887   30094 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:03.494049   30094 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:01:03.494216   30094 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:01:03.494338   30094 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:01:03.494432   30094 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:01:03.581625   30094 ssh_runner.go:195] Run: systemctl --version
	I0708 20:01:03.589476   30094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:03.608309   30094 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:01:03.608353   30094 api_server.go:166] Checking apiserver status ...
	I0708 20:01:03.608396   30094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:01:03.625723   30094 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0708 20:01:03.637081   30094 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:01:03.637141   30094 ssh_runner.go:195] Run: ls
	I0708 20:01:03.642158   30094 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:01:03.648353   30094 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:01:03.648373   30094 status.go:422] ha-511021 apiserver status = Running (err=<nil>)
	I0708 20:01:03.648394   30094 status.go:257] ha-511021 status: &{Name:ha-511021 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:01:03.648410   30094 status.go:255] checking status of ha-511021-m02 ...
	I0708 20:01:03.648680   30094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:03.648710   30094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:03.663635   30094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I0708 20:01:03.664070   30094 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:03.664543   30094 main.go:141] libmachine: Using API Version  1
	I0708 20:01:03.664565   30094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:03.664913   30094 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:03.665098   30094 main.go:141] libmachine: (ha-511021-m02) Calling .GetState
	I0708 20:01:03.666879   30094 status.go:330] ha-511021-m02 host status = "Running" (err=<nil>)
	I0708 20:01:03.666897   30094 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:01:03.667185   30094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:03.667222   30094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:03.683243   30094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I0708 20:01:03.683698   30094 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:03.684155   30094 main.go:141] libmachine: Using API Version  1
	I0708 20:01:03.684177   30094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:03.684466   30094 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:03.684669   30094 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 20:01:03.687504   30094 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:03.687955   30094 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:01:03.687981   30094 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:03.688113   30094 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:01:03.688405   30094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:03.688442   30094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:03.703623   30094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I0708 20:01:03.704088   30094 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:03.704597   30094 main.go:141] libmachine: Using API Version  1
	I0708 20:01:03.704616   30094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:03.704917   30094 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:03.705116   30094 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 20:01:03.705318   30094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:03.705336   30094 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 20:01:03.708037   30094 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:03.708471   30094 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:01:03.708508   30094 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:03.708673   30094 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 20:01:03.708848   30094 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 20:01:03.708997   30094 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 20:01:03.709141   30094 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	W0708 20:01:22.179636   30094 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.216:22: connect: no route to host
	W0708 20:01:22.179732   30094 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	E0708 20:01:22.179746   30094 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:22.179755   30094 status.go:257] ha-511021-m02 status: &{Name:ha-511021-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0708 20:01:22.179770   30094 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:22.179778   30094 status.go:255] checking status of ha-511021-m03 ...
	I0708 20:01:22.180093   30094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:22.180145   30094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:22.194906   30094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0708 20:01:22.195329   30094 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:22.195830   30094 main.go:141] libmachine: Using API Version  1
	I0708 20:01:22.195850   30094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:22.196210   30094 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:22.196366   30094 main.go:141] libmachine: (ha-511021-m03) Calling .GetState
	I0708 20:01:22.198328   30094 status.go:330] ha-511021-m03 host status = "Running" (err=<nil>)
	I0708 20:01:22.198390   30094 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:01:22.198809   30094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:22.198865   30094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:22.213218   30094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43617
	I0708 20:01:22.213651   30094 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:22.214122   30094 main.go:141] libmachine: Using API Version  1
	I0708 20:01:22.214140   30094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:22.214502   30094 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:22.214659   30094 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 20:01:22.217780   30094 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:22.218215   30094 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:01:22.218243   30094 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:22.218359   30094 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:01:22.218646   30094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:22.218679   30094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:22.235334   30094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0708 20:01:22.235735   30094 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:22.236211   30094 main.go:141] libmachine: Using API Version  1
	I0708 20:01:22.236230   30094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:22.236552   30094 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:22.236793   30094 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 20:01:22.236966   30094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:22.236987   30094 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 20:01:22.240090   30094 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:22.240560   30094 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:01:22.240583   30094 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:22.240739   30094 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 20:01:22.240915   30094 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 20:01:22.241070   30094 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 20:01:22.241195   30094 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 20:01:22.331056   30094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:22.350717   30094 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:01:22.350743   30094 api_server.go:166] Checking apiserver status ...
	I0708 20:01:22.350777   30094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:01:22.369562   30094 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0708 20:01:22.381791   30094 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:01:22.381841   30094 ssh_runner.go:195] Run: ls
	I0708 20:01:22.387566   30094 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:01:22.393701   30094 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:01:22.393729   30094 status.go:422] ha-511021-m03 apiserver status = Running (err=<nil>)
	I0708 20:01:22.393739   30094 status.go:257] ha-511021-m03 status: &{Name:ha-511021-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:01:22.393759   30094 status.go:255] checking status of ha-511021-m04 ...
	I0708 20:01:22.394046   30094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:22.394078   30094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:22.409174   30094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36555
	I0708 20:01:22.409649   30094 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:22.410129   30094 main.go:141] libmachine: Using API Version  1
	I0708 20:01:22.410153   30094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:22.410459   30094 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:22.410628   30094 main.go:141] libmachine: (ha-511021-m04) Calling .GetState
	I0708 20:01:22.412617   30094 status.go:330] ha-511021-m04 host status = "Running" (err=<nil>)
	I0708 20:01:22.412634   30094 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:01:22.413031   30094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:22.413073   30094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:22.428400   30094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36771
	I0708 20:01:22.428787   30094 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:22.429270   30094 main.go:141] libmachine: Using API Version  1
	I0708 20:01:22.429288   30094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:22.429624   30094 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:22.429824   30094 main.go:141] libmachine: (ha-511021-m04) Calling .GetIP
	I0708 20:01:22.432926   30094 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:22.433378   30094 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:01:22.433403   30094 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:22.433535   30094 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:01:22.433924   30094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:22.433970   30094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:22.449057   30094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0708 20:01:22.449431   30094 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:22.449852   30094 main.go:141] libmachine: Using API Version  1
	I0708 20:01:22.449873   30094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:22.450149   30094 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:22.450320   30094 main.go:141] libmachine: (ha-511021-m04) Calling .DriverName
	I0708 20:01:22.450469   30094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:22.450490   30094 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHHostname
	I0708 20:01:22.453425   30094 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:22.453871   30094 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:01:22.453893   30094 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:22.454090   30094 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHPort
	I0708 20:01:22.454257   30094 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHKeyPath
	I0708 20:01:22.454516   30094 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHUsername
	I0708 20:01:22.454664   30094 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m04/id_rsa Username:docker}
	I0708 20:01:22.540353   30094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:22.558036   30094 status.go:257] ha-511021-m04 status: &{Name:ha-511021-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-511021 -n ha-511021
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-511021 logs -n 25: (1.444744014s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3985602198/001/cp-test_ha-511021-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021:/home/docker/cp-test_ha-511021-m03_ha-511021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021 sudo cat                                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | /home/docker/cp-test_ha-511021-m03_ha-511021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m02:/home/docker/cp-test_ha-511021-m03_ha-511021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m02 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | /home/docker/cp-test_ha-511021-m03_ha-511021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m04:/home/docker/cp-test_ha-511021-m03_ha-511021-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m04 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | /home/docker/cp-test_ha-511021-m03_ha-511021-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-511021 cp testdata/cp-test.txt                                                | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3985602198/001/cp-test_ha-511021-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021:/home/docker/cp-test_ha-511021-m04_ha-511021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021 sudo cat                                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /home/docker/cp-test_ha-511021-m04_ha-511021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m02:/home/docker/cp-test_ha-511021-m04_ha-511021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m02 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /home/docker/cp-test_ha-511021-m04_ha-511021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m03:/home/docker/cp-test_ha-511021-m04_ha-511021-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m03 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /home/docker/cp-test_ha-511021-m04_ha-511021-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-511021 node stop m02 -v=7                                                     | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 19:54:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 19:54:39.652390   25689 out.go:291] Setting OutFile to fd 1 ...
	I0708 19:54:39.652659   25689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:54:39.652671   25689 out.go:304] Setting ErrFile to fd 2...
	I0708 19:54:39.652677   25689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:54:39.652870   25689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 19:54:39.653519   25689 out.go:298] Setting JSON to false
	I0708 19:54:39.654338   25689 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2229,"bootTime":1720466251,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 19:54:39.654396   25689 start.go:139] virtualization: kvm guest
	I0708 19:54:39.656698   25689 out.go:177] * [ha-511021] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 19:54:39.657932   25689 notify.go:220] Checking for updates...
	I0708 19:54:39.657980   25689 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 19:54:39.659140   25689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 19:54:39.660520   25689 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:54:39.661710   25689 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:54:39.662958   25689 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 19:54:39.664711   25689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 19:54:39.666004   25689 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 19:54:39.701610   25689 out.go:177] * Using the kvm2 driver based on user configuration
	I0708 19:54:39.702810   25689 start.go:297] selected driver: kvm2
	I0708 19:54:39.702827   25689 start.go:901] validating driver "kvm2" against <nil>
	I0708 19:54:39.702840   25689 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 19:54:39.703890   25689 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 19:54:39.703985   25689 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 19:54:39.718945   25689 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 19:54:39.718993   25689 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 19:54:39.719197   25689 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 19:54:39.719266   25689 cni.go:84] Creating CNI manager for ""
	I0708 19:54:39.719279   25689 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0708 19:54:39.719291   25689 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0708 19:54:39.719341   25689 start.go:340] cluster config:
	{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:54:39.719431   25689 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 19:54:39.722110   25689 out.go:177] * Starting "ha-511021" primary control-plane node in "ha-511021" cluster
	I0708 19:54:39.723356   25689 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 19:54:39.723392   25689 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 19:54:39.723400   25689 cache.go:56] Caching tarball of preloaded images
	I0708 19:54:39.723499   25689 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 19:54:39.723511   25689 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 19:54:39.723791   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:54:39.723826   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json: {Name:mk652d8bac760778730093f451bc96812e92f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:54:39.723958   25689 start.go:360] acquireMachinesLock for ha-511021: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 19:54:39.723985   25689 start.go:364] duration metric: took 14.37µs to acquireMachinesLock for "ha-511021"
	I0708 19:54:39.724000   25689 start.go:93] Provisioning new machine with config: &{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:54:39.724058   25689 start.go:125] createHost starting for "" (driver="kvm2")
	I0708 19:54:39.725645   25689 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 19:54:39.725765   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:54:39.725808   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:54:39.740444   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40161
	I0708 19:54:39.740875   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:54:39.741475   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:54:39.741494   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:54:39.741786   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:54:39.741961   25689 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 19:54:39.742100   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:54:39.742233   25689 start.go:159] libmachine.API.Create for "ha-511021" (driver="kvm2")
	I0708 19:54:39.742260   25689 client.go:168] LocalClient.Create starting
	I0708 19:54:39.742291   25689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem
	I0708 19:54:39.742320   25689 main.go:141] libmachine: Decoding PEM data...
	I0708 19:54:39.742333   25689 main.go:141] libmachine: Parsing certificate...
	I0708 19:54:39.742388   25689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem
	I0708 19:54:39.742411   25689 main.go:141] libmachine: Decoding PEM data...
	I0708 19:54:39.742424   25689 main.go:141] libmachine: Parsing certificate...
	I0708 19:54:39.742441   25689 main.go:141] libmachine: Running pre-create checks...
	I0708 19:54:39.742449   25689 main.go:141] libmachine: (ha-511021) Calling .PreCreateCheck
	I0708 19:54:39.742750   25689 main.go:141] libmachine: (ha-511021) Calling .GetConfigRaw
	I0708 19:54:39.743090   25689 main.go:141] libmachine: Creating machine...
	I0708 19:54:39.743102   25689 main.go:141] libmachine: (ha-511021) Calling .Create
	I0708 19:54:39.743227   25689 main.go:141] libmachine: (ha-511021) Creating KVM machine...
	I0708 19:54:39.744373   25689 main.go:141] libmachine: (ha-511021) DBG | found existing default KVM network
	I0708 19:54:39.745003   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:39.744885   25712 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0708 19:54:39.745055   25689 main.go:141] libmachine: (ha-511021) DBG | created network xml: 
	I0708 19:54:39.745063   25689 main.go:141] libmachine: (ha-511021) DBG | <network>
	I0708 19:54:39.745069   25689 main.go:141] libmachine: (ha-511021) DBG |   <name>mk-ha-511021</name>
	I0708 19:54:39.745078   25689 main.go:141] libmachine: (ha-511021) DBG |   <dns enable='no'/>
	I0708 19:54:39.745089   25689 main.go:141] libmachine: (ha-511021) DBG |   
	I0708 19:54:39.745096   25689 main.go:141] libmachine: (ha-511021) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0708 19:54:39.745102   25689 main.go:141] libmachine: (ha-511021) DBG |     <dhcp>
	I0708 19:54:39.745109   25689 main.go:141] libmachine: (ha-511021) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0708 19:54:39.745114   25689 main.go:141] libmachine: (ha-511021) DBG |     </dhcp>
	I0708 19:54:39.745121   25689 main.go:141] libmachine: (ha-511021) DBG |   </ip>
	I0708 19:54:39.745126   25689 main.go:141] libmachine: (ha-511021) DBG |   
	I0708 19:54:39.745130   25689 main.go:141] libmachine: (ha-511021) DBG | </network>
	I0708 19:54:39.745135   25689 main.go:141] libmachine: (ha-511021) DBG | 
	I0708 19:54:39.750050   25689 main.go:141] libmachine: (ha-511021) DBG | trying to create private KVM network mk-ha-511021 192.168.39.0/24...
	I0708 19:54:39.816264   25689 main.go:141] libmachine: (ha-511021) DBG | private KVM network mk-ha-511021 192.168.39.0/24 created
	I0708 19:54:39.816296   25689 main.go:141] libmachine: (ha-511021) Setting up store path in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021 ...
	I0708 19:54:39.816312   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:39.816258   25712 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:54:39.816333   25689 main.go:141] libmachine: (ha-511021) Building disk image from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso
	I0708 19:54:39.816446   25689 main.go:141] libmachine: (ha-511021) Downloading /home/jenkins/minikube-integration/19195-5988/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso...
	I0708 19:54:40.045141   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:40.045024   25712 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa...
	I0708 19:54:40.177060   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:40.176940   25712 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/ha-511021.rawdisk...
	I0708 19:54:40.177087   25689 main.go:141] libmachine: (ha-511021) DBG | Writing magic tar header
	I0708 19:54:40.177100   25689 main.go:141] libmachine: (ha-511021) DBG | Writing SSH key tar header
	I0708 19:54:40.177107   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:40.177071   25712 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021 ...
	I0708 19:54:40.177185   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021
	I0708 19:54:40.177228   25689 main.go:141] libmachine: (ha-511021) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021 (perms=drwx------)
	I0708 19:54:40.177238   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines
	I0708 19:54:40.177252   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:54:40.177263   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988
	I0708 19:54:40.177274   25689 main.go:141] libmachine: (ha-511021) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines (perms=drwxr-xr-x)
	I0708 19:54:40.177287   25689 main.go:141] libmachine: (ha-511021) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube (perms=drwxr-xr-x)
	I0708 19:54:40.177297   25689 main.go:141] libmachine: (ha-511021) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988 (perms=drwxrwxr-x)
	I0708 19:54:40.177304   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0708 19:54:40.177314   25689 main.go:141] libmachine: (ha-511021) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0708 19:54:40.177323   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home/jenkins
	I0708 19:54:40.177342   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home
	I0708 19:54:40.177357   25689 main.go:141] libmachine: (ha-511021) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0708 19:54:40.177366   25689 main.go:141] libmachine: (ha-511021) DBG | Skipping /home - not owner
	I0708 19:54:40.177374   25689 main.go:141] libmachine: (ha-511021) Creating domain...
	I0708 19:54:40.178547   25689 main.go:141] libmachine: (ha-511021) define libvirt domain using xml: 
	I0708 19:54:40.178572   25689 main.go:141] libmachine: (ha-511021) <domain type='kvm'>
	I0708 19:54:40.178595   25689 main.go:141] libmachine: (ha-511021)   <name>ha-511021</name>
	I0708 19:54:40.178609   25689 main.go:141] libmachine: (ha-511021)   <memory unit='MiB'>2200</memory>
	I0708 19:54:40.178618   25689 main.go:141] libmachine: (ha-511021)   <vcpu>2</vcpu>
	I0708 19:54:40.178627   25689 main.go:141] libmachine: (ha-511021)   <features>
	I0708 19:54:40.178634   25689 main.go:141] libmachine: (ha-511021)     <acpi/>
	I0708 19:54:40.178638   25689 main.go:141] libmachine: (ha-511021)     <apic/>
	I0708 19:54:40.178643   25689 main.go:141] libmachine: (ha-511021)     <pae/>
	I0708 19:54:40.178654   25689 main.go:141] libmachine: (ha-511021)     
	I0708 19:54:40.178658   25689 main.go:141] libmachine: (ha-511021)   </features>
	I0708 19:54:40.178663   25689 main.go:141] libmachine: (ha-511021)   <cpu mode='host-passthrough'>
	I0708 19:54:40.178670   25689 main.go:141] libmachine: (ha-511021)   
	I0708 19:54:40.178674   25689 main.go:141] libmachine: (ha-511021)   </cpu>
	I0708 19:54:40.178679   25689 main.go:141] libmachine: (ha-511021)   <os>
	I0708 19:54:40.178684   25689 main.go:141] libmachine: (ha-511021)     <type>hvm</type>
	I0708 19:54:40.178689   25689 main.go:141] libmachine: (ha-511021)     <boot dev='cdrom'/>
	I0708 19:54:40.178693   25689 main.go:141] libmachine: (ha-511021)     <boot dev='hd'/>
	I0708 19:54:40.178741   25689 main.go:141] libmachine: (ha-511021)     <bootmenu enable='no'/>
	I0708 19:54:40.178763   25689 main.go:141] libmachine: (ha-511021)   </os>
	I0708 19:54:40.178774   25689 main.go:141] libmachine: (ha-511021)   <devices>
	I0708 19:54:40.178788   25689 main.go:141] libmachine: (ha-511021)     <disk type='file' device='cdrom'>
	I0708 19:54:40.178805   25689 main.go:141] libmachine: (ha-511021)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/boot2docker.iso'/>
	I0708 19:54:40.178816   25689 main.go:141] libmachine: (ha-511021)       <target dev='hdc' bus='scsi'/>
	I0708 19:54:40.178822   25689 main.go:141] libmachine: (ha-511021)       <readonly/>
	I0708 19:54:40.178829   25689 main.go:141] libmachine: (ha-511021)     </disk>
	I0708 19:54:40.178835   25689 main.go:141] libmachine: (ha-511021)     <disk type='file' device='disk'>
	I0708 19:54:40.178844   25689 main.go:141] libmachine: (ha-511021)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0708 19:54:40.178856   25689 main.go:141] libmachine: (ha-511021)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/ha-511021.rawdisk'/>
	I0708 19:54:40.178871   25689 main.go:141] libmachine: (ha-511021)       <target dev='hda' bus='virtio'/>
	I0708 19:54:40.178883   25689 main.go:141] libmachine: (ha-511021)     </disk>
	I0708 19:54:40.178892   25689 main.go:141] libmachine: (ha-511021)     <interface type='network'>
	I0708 19:54:40.178901   25689 main.go:141] libmachine: (ha-511021)       <source network='mk-ha-511021'/>
	I0708 19:54:40.178911   25689 main.go:141] libmachine: (ha-511021)       <model type='virtio'/>
	I0708 19:54:40.178918   25689 main.go:141] libmachine: (ha-511021)     </interface>
	I0708 19:54:40.178927   25689 main.go:141] libmachine: (ha-511021)     <interface type='network'>
	I0708 19:54:40.178944   25689 main.go:141] libmachine: (ha-511021)       <source network='default'/>
	I0708 19:54:40.178958   25689 main.go:141] libmachine: (ha-511021)       <model type='virtio'/>
	I0708 19:54:40.178971   25689 main.go:141] libmachine: (ha-511021)     </interface>
	I0708 19:54:40.178981   25689 main.go:141] libmachine: (ha-511021)     <serial type='pty'>
	I0708 19:54:40.178990   25689 main.go:141] libmachine: (ha-511021)       <target port='0'/>
	I0708 19:54:40.178997   25689 main.go:141] libmachine: (ha-511021)     </serial>
	I0708 19:54:40.179009   25689 main.go:141] libmachine: (ha-511021)     <console type='pty'>
	I0708 19:54:40.179019   25689 main.go:141] libmachine: (ha-511021)       <target type='serial' port='0'/>
	I0708 19:54:40.179039   25689 main.go:141] libmachine: (ha-511021)     </console>
	I0708 19:54:40.179058   25689 main.go:141] libmachine: (ha-511021)     <rng model='virtio'>
	I0708 19:54:40.179069   25689 main.go:141] libmachine: (ha-511021)       <backend model='random'>/dev/random</backend>
	I0708 19:54:40.179076   25689 main.go:141] libmachine: (ha-511021)     </rng>
	I0708 19:54:40.179096   25689 main.go:141] libmachine: (ha-511021)     
	I0708 19:54:40.179104   25689 main.go:141] libmachine: (ha-511021)     
	I0708 19:54:40.179109   25689 main.go:141] libmachine: (ha-511021)   </devices>
	I0708 19:54:40.179112   25689 main.go:141] libmachine: (ha-511021) </domain>
	I0708 19:54:40.179120   25689 main.go:141] libmachine: (ha-511021) 
	I0708 19:54:40.183577   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:68:53:2b in network default
	I0708 19:54:40.184048   25689 main.go:141] libmachine: (ha-511021) Ensuring networks are active...
	I0708 19:54:40.184062   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:40.184725   25689 main.go:141] libmachine: (ha-511021) Ensuring network default is active
	I0708 19:54:40.184920   25689 main.go:141] libmachine: (ha-511021) Ensuring network mk-ha-511021 is active
	I0708 19:54:40.185353   25689 main.go:141] libmachine: (ha-511021) Getting domain xml...
	I0708 19:54:40.185973   25689 main.go:141] libmachine: (ha-511021) Creating domain...
	I0708 19:54:41.366987   25689 main.go:141] libmachine: (ha-511021) Waiting to get IP...
	I0708 19:54:41.367752   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:41.368118   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:41.368146   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:41.368093   25712 retry.go:31] will retry after 263.500393ms: waiting for machine to come up
	I0708 19:54:41.635094   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:41.635654   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:41.635684   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:41.635587   25712 retry.go:31] will retry after 349.843209ms: waiting for machine to come up
	I0708 19:54:41.987220   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:41.987653   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:41.987679   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:41.987609   25712 retry.go:31] will retry after 367.765084ms: waiting for machine to come up
	I0708 19:54:42.357171   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:42.357540   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:42.357566   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:42.357495   25712 retry.go:31] will retry after 460.024411ms: waiting for machine to come up
	I0708 19:54:42.819139   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:42.819478   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:42.819502   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:42.819417   25712 retry.go:31] will retry after 747.974264ms: waiting for machine to come up
	I0708 19:54:43.569274   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:43.569664   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:43.569688   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:43.569626   25712 retry.go:31] will retry after 651.085668ms: waiting for machine to come up
	I0708 19:54:44.222296   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:44.222750   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:44.222777   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:44.222704   25712 retry.go:31] will retry after 959.305664ms: waiting for machine to come up
	I0708 19:54:45.183309   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:45.183677   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:45.183706   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:45.183669   25712 retry.go:31] will retry after 1.142334131s: waiting for machine to come up
	I0708 19:54:46.327888   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:46.328221   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:46.328241   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:46.328175   25712 retry.go:31] will retry after 1.319661086s: waiting for machine to come up
	I0708 19:54:47.649728   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:47.650122   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:47.650141   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:47.650084   25712 retry.go:31] will retry after 1.664166267s: waiting for machine to come up
	I0708 19:54:49.315484   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:49.315912   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:49.315946   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:49.315857   25712 retry.go:31] will retry after 2.828162199s: waiting for machine to come up
	I0708 19:54:52.146523   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:52.146907   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:52.146941   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:52.146890   25712 retry.go:31] will retry after 3.36474102s: waiting for machine to come up
	I0708 19:54:55.512873   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:55.513261   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:55.513283   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:55.513208   25712 retry.go:31] will retry after 3.879896256s: waiting for machine to come up
	I0708 19:54:59.397113   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.397526   25689 main.go:141] libmachine: (ha-511021) Found IP for machine: 192.168.39.33
	I0708 19:54:59.397555   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has current primary IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.397564   25689 main.go:141] libmachine: (ha-511021) Reserving static IP address...
	I0708 19:54:59.397902   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find host DHCP lease matching {name: "ha-511021", mac: "52:54:00:fe:1e:ad", ip: "192.168.39.33"} in network mk-ha-511021
	I0708 19:54:59.470686   25689 main.go:141] libmachine: (ha-511021) DBG | Getting to WaitForSSH function...
	I0708 19:54:59.470713   25689 main.go:141] libmachine: (ha-511021) Reserved static IP address: 192.168.39.33
	I0708 19:54:59.470736   25689 main.go:141] libmachine: (ha-511021) Waiting for SSH to be available...
	I0708 19:54:59.473464   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.473834   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:54:59.473868   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.473954   25689 main.go:141] libmachine: (ha-511021) DBG | Using SSH client type: external
	I0708 19:54:59.473992   25689 main.go:141] libmachine: (ha-511021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa (-rw-------)
	I0708 19:54:59.474032   25689 main.go:141] libmachine: (ha-511021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 19:54:59.474054   25689 main.go:141] libmachine: (ha-511021) DBG | About to run SSH command:
	I0708 19:54:59.474067   25689 main.go:141] libmachine: (ha-511021) DBG | exit 0
	I0708 19:54:59.600057   25689 main.go:141] libmachine: (ha-511021) DBG | SSH cmd err, output: <nil>: 
	I0708 19:54:59.600395   25689 main.go:141] libmachine: (ha-511021) KVM machine creation complete!
	I0708 19:54:59.600702   25689 main.go:141] libmachine: (ha-511021) Calling .GetConfigRaw
	I0708 19:54:59.601244   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:54:59.601479   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:54:59.601690   25689 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0708 19:54:59.601718   25689 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 19:54:59.603230   25689 main.go:141] libmachine: Detecting operating system of created instance...
	I0708 19:54:59.603244   25689 main.go:141] libmachine: Waiting for SSH to be available...
	I0708 19:54:59.603249   25689 main.go:141] libmachine: Getting to WaitForSSH function...
	I0708 19:54:59.603255   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:54:59.605670   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.606032   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:54:59.606067   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.606237   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:54:59.606446   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.606666   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.606834   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:54:59.606990   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:54:59.607203   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 19:54:59.607218   25689 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0708 19:54:59.714903   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 19:54:59.714921   25689 main.go:141] libmachine: Detecting the provisioner...
	I0708 19:54:59.714929   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:54:59.717832   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.718207   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:54:59.718249   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.718390   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:54:59.718593   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.718742   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.718844   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:54:59.718988   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:54:59.719185   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 19:54:59.719200   25689 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0708 19:54:59.828555   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0708 19:54:59.828612   25689 main.go:141] libmachine: found compatible host: buildroot
	I0708 19:54:59.828619   25689 main.go:141] libmachine: Provisioning with buildroot...
	I0708 19:54:59.828626   25689 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 19:54:59.828872   25689 buildroot.go:166] provisioning hostname "ha-511021"
	I0708 19:54:59.828891   25689 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 19:54:59.829084   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:54:59.831701   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.832072   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:54:59.832098   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.832244   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:54:59.832565   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.832721   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.832857   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:54:59.833015   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:54:59.833200   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 19:54:59.833215   25689 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-511021 && echo "ha-511021" | sudo tee /etc/hostname
	I0708 19:54:59.954208   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-511021
	
	I0708 19:54:59.954240   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:54:59.957219   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.957536   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:54:59.957566   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.957747   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:54:59.957959   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.958145   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.958310   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:54:59.958455   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:54:59.958649   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 19:54:59.958672   25689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-511021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-511021/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-511021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 19:55:00.073351   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 19:55:00.073377   25689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 19:55:00.073414   25689 buildroot.go:174] setting up certificates
	I0708 19:55:00.073439   25689 provision.go:84] configureAuth start
	I0708 19:55:00.073451   25689 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 19:55:00.073731   25689 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 19:55:00.076659   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.077115   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.077139   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.077391   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.079629   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.080022   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.080068   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.080210   25689 provision.go:143] copyHostCerts
	I0708 19:55:00.080241   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 19:55:00.080299   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 19:55:00.080310   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 19:55:00.080377   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 19:55:00.080452   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 19:55:00.080474   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 19:55:00.080481   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 19:55:00.080504   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 19:55:00.080547   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 19:55:00.080562   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 19:55:00.080568   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 19:55:00.080587   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 19:55:00.080635   25689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.ha-511021 san=[127.0.0.1 192.168.39.33 ha-511021 localhost minikube]
	I0708 19:55:00.264734   25689 provision.go:177] copyRemoteCerts
	I0708 19:55:00.264785   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 19:55:00.264806   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.267804   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.268185   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.268214   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.268450   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:00.268651   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.268828   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:00.268965   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:00.354041   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 19:55:00.354113   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 19:55:00.380126   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 19:55:00.380202   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0708 19:55:00.406408   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 19:55:00.406474   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 19:55:00.434875   25689 provision.go:87] duration metric: took 361.421634ms to configureAuth
	I0708 19:55:00.434902   25689 buildroot.go:189] setting minikube options for container-runtime
	I0708 19:55:00.435106   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:55:00.435203   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.437630   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.437884   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.437909   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.438066   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:00.438261   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.438445   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.438605   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:00.438746   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:00.438926   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 19:55:00.438949   25689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 19:55:00.709587   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 19:55:00.709617   25689 main.go:141] libmachine: Checking connection to Docker...
	I0708 19:55:00.709625   25689 main.go:141] libmachine: (ha-511021) Calling .GetURL
	I0708 19:55:00.710958   25689 main.go:141] libmachine: (ha-511021) DBG | Using libvirt version 6000000
	I0708 19:55:00.712974   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.713254   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.713274   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.713456   25689 main.go:141] libmachine: Docker is up and running!
	I0708 19:55:00.713469   25689 main.go:141] libmachine: Reticulating splines...
	I0708 19:55:00.713477   25689 client.go:171] duration metric: took 20.97120701s to LocalClient.Create
	I0708 19:55:00.713502   25689 start.go:167] duration metric: took 20.971270107s to libmachine.API.Create "ha-511021"
	I0708 19:55:00.713514   25689 start.go:293] postStartSetup for "ha-511021" (driver="kvm2")
	I0708 19:55:00.713526   25689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 19:55:00.713558   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:00.713770   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 19:55:00.713790   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.715882   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.716236   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.716255   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.716435   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:00.716616   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.716806   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:00.716940   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:00.802288   25689 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 19:55:00.806405   25689 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 19:55:00.806428   25689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 19:55:00.806492   25689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 19:55:00.806594   25689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 19:55:00.806607   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /etc/ssl/certs/131412.pem
	I0708 19:55:00.806723   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 19:55:00.816350   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 19:55:00.841111   25689 start.go:296] duration metric: took 127.584278ms for postStartSetup
	I0708 19:55:00.841154   25689 main.go:141] libmachine: (ha-511021) Calling .GetConfigRaw
	I0708 19:55:00.841827   25689 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 19:55:00.844230   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.844540   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.844567   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.844773   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:55:00.844938   25689 start.go:128] duration metric: took 21.120872101s to createHost
	I0708 19:55:00.844959   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.846861   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.847129   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.847154   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.847287   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:00.847492   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.847648   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.847780   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:00.847916   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:00.848078   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 19:55:00.848087   25689 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 19:55:00.956499   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720468500.928643392
	
	I0708 19:55:00.956531   25689 fix.go:216] guest clock: 1720468500.928643392
	I0708 19:55:00.956539   25689 fix.go:229] Guest: 2024-07-08 19:55:00.928643392 +0000 UTC Remote: 2024-07-08 19:55:00.844949642 +0000 UTC m=+21.230644795 (delta=83.69375ms)
	I0708 19:55:00.956574   25689 fix.go:200] guest clock delta is within tolerance: 83.69375ms
	I0708 19:55:00.956587   25689 start.go:83] releasing machines lock for "ha-511021", held for 21.232586521s
	I0708 19:55:00.956608   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:00.956859   25689 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 19:55:00.959369   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.959802   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.959831   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.959990   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:00.960466   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:00.960617   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:00.960673   25689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 19:55:00.960713   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.960823   25689 ssh_runner.go:195] Run: cat /version.json
	I0708 19:55:00.960846   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.963523   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.963751   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.963849   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.963877   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.964000   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:00.964148   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.964168   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.964226   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.964347   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:00.964356   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:00.964476   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:00.964502   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.964624   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:00.964748   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:01.041150   25689 ssh_runner.go:195] Run: systemctl --version
	I0708 19:55:01.065415   25689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 19:55:01.226264   25689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 19:55:01.233290   25689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 19:55:01.233360   25689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 19:55:01.250592   25689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 19:55:01.250619   25689 start.go:494] detecting cgroup driver to use...
	I0708 19:55:01.250704   25689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 19:55:01.270164   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 19:55:01.285178   25689 docker.go:217] disabling cri-docker service (if available) ...
	I0708 19:55:01.285251   25689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 19:55:01.299973   25689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 19:55:01.314671   25689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 19:55:01.429194   25689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 19:55:01.598542   25689 docker.go:233] disabling docker service ...
	I0708 19:55:01.598602   25689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 19:55:01.614109   25689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 19:55:01.627759   25689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 19:55:01.769835   25689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 19:55:01.899695   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 19:55:01.914521   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 19:55:01.934549   25689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 19:55:01.934617   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:01.946357   25689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 19:55:01.946430   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:01.958494   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:01.971068   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:01.983335   25689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 19:55:01.995240   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:02.006738   25689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:02.024510   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:02.036171   25689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 19:55:02.046861   25689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 19:55:02.046946   25689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 19:55:02.062314   25689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 19:55:02.073111   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:55:02.189279   25689 ssh_runner.go:195] Run: sudo systemctl restart crio
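For reference, the sed edits above (pause image, cgroup manager, conmon cgroup, default sysctls) should leave the CRI-O drop-in looking roughly like the sketch below. This is reconstructed from the commands in the log, not captured from the VM, and the section headers are assumed from stock CRI-O packaging:
    # /etc/crio/crio.conf.d/02-crio.conf (expected state after the edits above)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]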
	I0708 19:55:02.323848   25689 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 19:55:02.323929   25689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 19:55:02.328847   25689 start.go:562] Will wait 60s for crictl version
	I0708 19:55:02.328911   25689 ssh_runner.go:195] Run: which crictl
	I0708 19:55:02.332927   25689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 19:55:02.377418   25689 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 19:55:02.377489   25689 ssh_runner.go:195] Run: crio --version
	I0708 19:55:02.406746   25689 ssh_runner.go:195] Run: crio --version
	I0708 19:55:02.439026   25689 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 19:55:02.440298   25689 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 19:55:02.442945   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:02.443243   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:02.443267   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:02.443553   25689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 19:55:02.448030   25689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:55:02.461988   25689 kubeadm.go:877] updating cluster {Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 19:55:02.462085   25689 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 19:55:02.462131   25689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 19:55:02.497382   25689 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 19:55:02.497456   25689 ssh_runner.go:195] Run: which lz4
	I0708 19:55:02.501498   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0708 19:55:02.501585   25689 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 19:55:02.506177   25689 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 19:55:02.506207   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 19:55:03.961052   25689 crio.go:462] duration metric: took 1.459490708s to copy over tarball
	I0708 19:55:03.961131   25689 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 19:55:06.114665   25689 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.15350267s)
	I0708 19:55:06.114696   25689 crio.go:469] duration metric: took 2.153618785s to extract the tarball
	I0708 19:55:06.114703   25689 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 19:55:06.153351   25689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 19:55:06.202758   25689 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 19:55:06.202780   25689 cache_images.go:84] Images are preloaded, skipping loading
	I0708 19:55:06.202789   25689 kubeadm.go:928] updating node { 192.168.39.33 8443 v1.30.2 crio true true} ...
	I0708 19:55:06.202902   25689 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-511021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
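A note on the kubelet unit above: the drop-in is written a little further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes) alongside /lib/systemd/system/kubelet.service. Once on the node it can be inspected with something like the following (suggested check, not part of this run):
    sudo systemctl cat kubelet.service
systemctl cat prints the unit file plus every drop-in that overrides it, which is the quickest way to confirm the --hostname-override and --node-ip flags shown above actually landed.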
	I0708 19:55:06.202965   25689 ssh_runner.go:195] Run: crio config
	I0708 19:55:06.250069   25689 cni.go:84] Creating CNI manager for ""
	I0708 19:55:06.250085   25689 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 19:55:06.250093   25689 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 19:55:06.250111   25689 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.33 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-511021 NodeName:ha-511021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 19:55:06.250280   25689 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-511021"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
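The assembled config above is staged further down as /var/tmp/minikube/kubeadm.yaml.new and later fed to kubeadm init via --config /var/tmp/minikube/kubeadm.yaml. If it ever needs to be sanity-checked by hand, something like the following should work against the staged file (suggested command, not part of this run):
    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml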
	
	I0708 19:55:06.250303   25689 kube-vip.go:115] generating kube-vip config ...
	I0708 19:55:06.250349   25689 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0708 19:55:06.269168   25689 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0708 19:55:06.269284   25689 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
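The manifest above is installed as a static pod: a few lines below it is copied to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet starts kube-vip directly and it in turn advertises 192.168.39.254, the address that control-plane.minikube.internal:8443 resolves to in this cluster. Two quick ways to check it once the node is up (suggested commands, not part of this run; output not captured here):
    sudo crictl ps --name kube-vip
    curl -k https://192.168.39.254:8443/healthz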
	I0708 19:55:06.269345   25689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 19:55:06.279384   25689 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 19:55:06.279475   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0708 19:55:06.289203   25689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0708 19:55:06.306698   25689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 19:55:06.324526   25689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0708 19:55:06.341335   25689 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0708 19:55:06.358722   25689 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0708 19:55:06.362943   25689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:55:06.376102   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:55:06.492892   25689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:55:06.510981   25689 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021 for IP: 192.168.39.33
	I0708 19:55:06.511007   25689 certs.go:194] generating shared ca certs ...
	I0708 19:55:06.511022   25689 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:06.511192   25689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 19:55:06.511248   25689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 19:55:06.511263   25689 certs.go:256] generating profile certs ...
	I0708 19:55:06.511331   25689 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key
	I0708 19:55:06.511355   25689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.crt with IP's: []
	I0708 19:55:06.695699   25689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.crt ...
	I0708 19:55:06.695728   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.crt: {Name:mke97764dd135ab9d0e1fc55099f96d1b806e54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:06.695921   25689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key ...
	I0708 19:55:06.695936   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key: {Name:mk53c15aa980b0692c0d4c2e27e159704091483b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:06.696035   25689 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.655f3ec0
	I0708 19:55:06.696051   25689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.655f3ec0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.33 192.168.39.254]
	I0708 19:55:06.853818   25689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.655f3ec0 ...
	I0708 19:55:06.853848   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.655f3ec0: {Name:mke1c560140d2b33b7839a6aaf663f5c37079bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:06.854036   25689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.655f3ec0 ...
	I0708 19:55:06.854052   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.655f3ec0: {Name:mkf58a0bcd6873684c72bf33352fced7876fdfac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:06.854146   25689 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.655f3ec0 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt
	I0708 19:55:06.854241   25689 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.655f3ec0 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key
	I0708 19:55:06.854301   25689 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key
	I0708 19:55:06.854316   25689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt with IP's: []
	I0708 19:55:07.356523   25689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt ...
	I0708 19:55:07.356553   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt: {Name:mk88bac3c3c9852133ee72c0b6f05a2a984c8dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:07.356710   25689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key ...
	I0708 19:55:07.356721   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key: {Name:mkd63a74860318e3b37978b8c4c8682a51f4eea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:07.356785   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 19:55:07.356802   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 19:55:07.356812   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 19:55:07.356825   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 19:55:07.356837   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 19:55:07.356849   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 19:55:07.356862   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 19:55:07.356873   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 19:55:07.356923   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 19:55:07.356956   25689 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 19:55:07.356965   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 19:55:07.356986   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 19:55:07.357008   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 19:55:07.357028   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 19:55:07.357062   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 19:55:07.357088   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:07.357102   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem -> /usr/share/ca-certificates/13141.pem
	I0708 19:55:07.357116   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /usr/share/ca-certificates/131412.pem
	I0708 19:55:07.357613   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 19:55:07.394108   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 19:55:07.423235   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 19:55:07.450103   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 19:55:07.480148   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0708 19:55:07.508796   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 19:55:07.533501   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 19:55:07.559283   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 19:55:07.584004   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 19:55:07.608704   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 19:55:07.635038   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 19:55:07.660483   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
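With the profile certs copied into /var/lib/minikube/certs, the SANs baked into the apiserver certificate generated above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.33 and the VIP 192.168.39.254) can be confirmed with a standard openssl query (suggested check, not part of this run):
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'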
	I0708 19:55:07.678814   25689 ssh_runner.go:195] Run: openssl version
	I0708 19:55:07.685231   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 19:55:07.697337   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 19:55:07.702100   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 19:55:07.702171   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 19:55:07.708383   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 19:55:07.720214   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 19:55:07.732125   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 19:55:07.736877   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 19:55:07.736930   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 19:55:07.742747   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 19:55:07.754534   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 19:55:07.766389   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:07.771122   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:07.771183   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:07.777309   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 19:55:07.789376   25689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 19:55:07.793896   25689 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 19:55:07.793952   25689 kubeadm.go:391] StartCluster: {Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:55:07.794053   25689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 19:55:07.794108   25689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 19:55:07.837758   25689 cri.go:89] found id: ""
	I0708 19:55:07.837819   25689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 19:55:07.848649   25689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 19:55:07.859432   25689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 19:55:07.871520   25689 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 19:55:07.871543   25689 kubeadm.go:156] found existing configuration files:
	
	I0708 19:55:07.871588   25689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 19:55:07.882400   25689 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 19:55:07.882463   25689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 19:55:07.893059   25689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 19:55:07.903472   25689 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 19:55:07.903534   25689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 19:55:07.914821   25689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 19:55:07.925118   25689 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 19:55:07.925171   25689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 19:55:07.935632   25689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 19:55:07.945862   25689 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 19:55:07.945919   25689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 19:55:07.957001   25689 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 19:55:08.064510   25689 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 19:55:08.064591   25689 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 19:55:08.220059   25689 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 19:55:08.220151   25689 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 19:55:08.220237   25689 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 19:55:08.424551   25689 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 19:55:08.449925   25689 out.go:204]   - Generating certificates and keys ...
	I0708 19:55:08.450055   25689 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 19:55:08.450141   25689 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 19:55:08.613766   25689 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0708 19:55:08.811113   25689 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0708 19:55:08.979231   25689 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0708 19:55:09.093594   25689 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0708 19:55:09.323369   25689 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0708 19:55:09.323626   25689 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-511021 localhost] and IPs [192.168.39.33 127.0.0.1 ::1]
	I0708 19:55:09.668270   25689 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0708 19:55:09.668543   25689 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-511021 localhost] and IPs [192.168.39.33 127.0.0.1 ::1]
	I0708 19:55:09.737094   25689 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0708 19:55:09.938904   25689 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0708 19:55:10.056296   25689 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0708 19:55:10.056385   25689 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 19:55:10.229973   25689 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 19:55:10.438458   25689 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 19:55:10.585166   25689 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 19:55:10.735716   25689 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 19:55:10.888057   25689 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 19:55:10.889454   25689 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 19:55:10.893265   25689 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 19:55:10.895315   25689 out.go:204]   - Booting up control plane ...
	I0708 19:55:10.895422   25689 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 19:55:10.895544   25689 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 19:55:10.895631   25689 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 19:55:10.912031   25689 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 19:55:10.913120   25689 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 19:55:10.913185   25689 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 19:55:11.057049   25689 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 19:55:11.057160   25689 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 19:55:11.555851   25689 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.246575ms
	I0708 19:55:11.555972   25689 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 19:55:18.055410   25689 kubeadm.go:309] [api-check] The API server is healthy after 6.503225834s
	I0708 19:55:18.076650   25689 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 19:55:18.100189   25689 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 19:55:18.132982   25689 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 19:55:18.133239   25689 kubeadm.go:309] [mark-control-plane] Marking the node ha-511021 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 19:55:18.149997   25689 kubeadm.go:309] [bootstrap-token] Using token: fnvqsi.ql5n6lfkoy8q2zw7
	I0708 19:55:18.151537   25689 out.go:204]   - Configuring RBAC rules ...
	I0708 19:55:18.151630   25689 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 19:55:18.156761   25689 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 19:55:18.169164   25689 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 19:55:18.176351   25689 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 19:55:18.180247   25689 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 19:55:18.183990   25689 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 19:55:18.465664   25689 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 19:55:18.913327   25689 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 19:55:19.465637   25689 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 19:55:19.466620   25689 kubeadm.go:309] 
	I0708 19:55:19.466682   25689 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 19:55:19.466687   25689 kubeadm.go:309] 
	I0708 19:55:19.466754   25689 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 19:55:19.466761   25689 kubeadm.go:309] 
	I0708 19:55:19.466787   25689 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 19:55:19.466856   25689 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 19:55:19.466916   25689 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 19:55:19.466924   25689 kubeadm.go:309] 
	I0708 19:55:19.467008   25689 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 19:55:19.467041   25689 kubeadm.go:309] 
	I0708 19:55:19.467116   25689 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 19:55:19.467135   25689 kubeadm.go:309] 
	I0708 19:55:19.467211   25689 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 19:55:19.467272   25689 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 19:55:19.467332   25689 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 19:55:19.467338   25689 kubeadm.go:309] 
	I0708 19:55:19.467424   25689 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 19:55:19.467505   25689 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 19:55:19.467517   25689 kubeadm.go:309] 
	I0708 19:55:19.467633   25689 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fnvqsi.ql5n6lfkoy8q2zw7 \
	I0708 19:55:19.467774   25689 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 19:55:19.467802   25689 kubeadm.go:309] 	--control-plane 
	I0708 19:55:19.467813   25689 kubeadm.go:309] 
	I0708 19:55:19.467935   25689 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 19:55:19.467947   25689 kubeadm.go:309] 
	I0708 19:55:19.468058   25689 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fnvqsi.ql5n6lfkoy8q2zw7 \
	I0708 19:55:19.468204   25689 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 19:55:19.468641   25689 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
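Note: the [kubeadm.go:309] lines above are kubeadm's own stdout, relayed one line at a time by minikube's ssh_runner while `kubeadm init` runs on the VM. A minimal sketch of streaming a long-running command's output line by line in Go (illustrative only, run locally rather than over SSH; the command, config path, and printed prefix are assumptions, not minikube's actual code):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Illustrative: stream each stdout line of a long-running command as it appears.
	cmd := exec.Command("kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		// minikube prefixes these relayed lines with "kubeadm.go:309]" in its own log.
		fmt.Println("kubeadm:", sc.Text())
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}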
	I0708 19:55:19.468710   25689 cni.go:84] Creating CNI manager for ""
	I0708 19:55:19.468723   25689 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 19:55:19.470694   25689 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0708 19:55:19.472019   25689 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0708 19:55:19.478275   25689 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0708 19:55:19.478292   25689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0708 19:55:19.503581   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0708 19:55:19.867546   25689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 19:55:19.867644   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:19.867644   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-511021 minikube.k8s.io/updated_at=2024_07_08T19_55_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=ha-511021 minikube.k8s.io/primary=true
	I0708 19:55:19.888879   25689 ops.go:34] apiserver oom_adj: -16
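Note: the oom_adj value of -16 reported here comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command a few lines earlier, i.e. reading the apiserver's OOM adjustment from procfs. A rough local equivalent in Go (a sketch, assuming a `pgrep` binary is available and kube-apiserver is running):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver PID, then read its OOM adjustment score from /proc.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		log.Fatal(err)
	}
	pid := strings.Fields(string(out))[0]
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}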
	I0708 19:55:20.082005   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:20.583085   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:21.082145   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:21.582700   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:22.082462   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:22.582940   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:23.082590   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:23.582423   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:24.082781   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:24.582976   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:25.083032   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:25.582674   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:26.083052   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:26.582821   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:27.082293   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:27.583072   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:28.082385   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:28.582339   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:29.082683   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:29.582048   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:30.082107   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:30.582938   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:31.082831   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:31.583050   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:31.715376   25689 kubeadm.go:1107] duration metric: took 11.847797877s to wait for elevateKubeSystemPrivileges
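Note: the repeated `kubectl get sa default` runs above are a fixed-interval poll: the command is retried roughly every 500ms until the "default" ServiceAccount exists, which is what the 11.8s elevateKubeSystemPrivileges metric measures. A minimal sketch of that kind of poll loop (kubectl binary name, kubeconfig path, and the 2-minute deadline are placeholders, not minikube's exact values):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds (exit 0) once the "default" ServiceAccount has been created.
		err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default").Run()
		if err == nil {
			fmt.Println("default ServiceAccount is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for the default ServiceAccount")
}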
	W0708 19:55:31.715411   25689 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 19:55:31.715420   25689 kubeadm.go:393] duration metric: took 23.921473775s to StartCluster
	I0708 19:55:31.715439   25689 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:31.715531   25689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:55:31.716201   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:31.716405   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 19:55:31.716415   25689 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:55:31.716431   25689 start.go:240] waiting for startup goroutines ...
	I0708 19:55:31.716440   25689 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 19:55:31.716484   25689 addons.go:69] Setting storage-provisioner=true in profile "ha-511021"
	I0708 19:55:31.716504   25689 addons.go:69] Setting default-storageclass=true in profile "ha-511021"
	I0708 19:55:31.716514   25689 addons.go:234] Setting addon storage-provisioner=true in "ha-511021"
	I0708 19:55:31.716535   25689 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-511021"
	I0708 19:55:31.716544   25689 host.go:66] Checking if "ha-511021" exists ...
	I0708 19:55:31.716675   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:55:31.716895   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:31.716924   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:31.716951   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:31.716994   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:31.733136   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45125
	I0708 19:55:31.733170   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0708 19:55:31.733675   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:31.733704   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:31.734218   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:31.734243   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:31.734221   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:31.734260   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:31.734559   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:31.734563   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:31.734739   25689 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 19:55:31.735211   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:31.735251   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:31.737036   25689 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:55:31.737389   25689 kapi.go:59] client config for ha-511021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key", CAFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfdf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 19:55:31.737955   25689 cert_rotation.go:137] Starting client certificate rotation controller
	I0708 19:55:31.738268   25689 addons.go:234] Setting addon default-storageclass=true in "ha-511021"
	I0708 19:55:31.738309   25689 host.go:66] Checking if "ha-511021" exists ...
	I0708 19:55:31.738676   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:31.738706   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:31.751038   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I0708 19:55:31.751496   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:31.752011   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:31.752027   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:31.752371   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:31.752578   25689 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 19:55:31.754405   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:31.754938   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42489
	I0708 19:55:31.755281   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:31.755878   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:31.755895   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:31.756190   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:31.756602   25689 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 19:55:31.756759   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:31.756810   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:31.758256   25689 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 19:55:31.758272   25689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 19:55:31.758287   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:31.761244   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:31.761607   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:31.761621   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:31.761906   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:31.762098   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:31.762231   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:31.762351   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:31.772637   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36585
	I0708 19:55:31.773059   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:31.773538   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:31.773573   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:31.773981   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:31.774151   25689 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 19:55:31.775990   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:31.776256   25689 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 19:55:31.776270   25689 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 19:55:31.776286   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:31.778844   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:31.779189   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:31.779213   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:31.779430   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:31.779612   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:31.779755   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:31.779968   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
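Note: each `new ssh client` line above corresponds to a connection built from the machine's IP, port 22, the generated id_rsa key, and the "docker" user. A bare-bones version of such a connection using golang.org/x/crypto/ssh (a sketch under the assumption that key-only auth suffices; libmachine's actual client differs):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no used elsewhere in this log
	}
	client, err := ssh.Dial("tcp", "192.168.39.33:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("echo connected")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}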
	I0708 19:55:31.877499   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0708 19:55:31.941292   25689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 19:55:31.978237   25689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 19:55:32.400116   25689 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
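Note: the long sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 here) via an inserted hosts-with-fallthrough block. A small check that the record landed, again shelling out to kubectl (illustrative only; not how minikube verifies it):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "-n", "kube-system",
		"get", "configmap", "coredns", "-o", "yaml").Output()
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(string(out), "host.minikube.internal") {
		fmt.Println("host record present in the CoreDNS ConfigMap")
	} else {
		fmt.Println("host record missing")
	}
}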
	I0708 19:55:32.651893   25689 main.go:141] libmachine: Making call to close driver server
	I0708 19:55:32.651915   25689 main.go:141] libmachine: (ha-511021) Calling .Close
	I0708 19:55:32.651946   25689 main.go:141] libmachine: Making call to close driver server
	I0708 19:55:32.651962   25689 main.go:141] libmachine: (ha-511021) Calling .Close
	I0708 19:55:32.652197   25689 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:55:32.652220   25689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:55:32.652230   25689 main.go:141] libmachine: Making call to close driver server
	I0708 19:55:32.652238   25689 main.go:141] libmachine: (ha-511021) Calling .Close
	I0708 19:55:32.652256   25689 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:55:32.652267   25689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:55:32.652308   25689 main.go:141] libmachine: Making call to close driver server
	I0708 19:55:32.652320   25689 main.go:141] libmachine: (ha-511021) Calling .Close
	I0708 19:55:32.652272   25689 main.go:141] libmachine: (ha-511021) DBG | Closing plugin on server side
	I0708 19:55:32.652420   25689 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:55:32.652436   25689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:55:32.652572   25689 main.go:141] libmachine: (ha-511021) DBG | Closing plugin on server side
	I0708 19:55:32.652610   25689 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:55:32.652622   25689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:55:32.652793   25689 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0708 19:55:32.652804   25689 round_trippers.go:469] Request Headers:
	I0708 19:55:32.652815   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:55:32.652821   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:55:32.690663   25689 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0708 19:55:32.691193   25689 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0708 19:55:32.691207   25689 round_trippers.go:469] Request Headers:
	I0708 19:55:32.691214   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:55:32.691218   25689 round_trippers.go:473]     Content-Type: application/json
	I0708 19:55:32.691220   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:55:32.697943   25689 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0708 19:55:32.698112   25689 main.go:141] libmachine: Making call to close driver server
	I0708 19:55:32.698131   25689 main.go:141] libmachine: (ha-511021) Calling .Close
	I0708 19:55:32.698428   25689 main.go:141] libmachine: (ha-511021) DBG | Closing plugin on server side
	I0708 19:55:32.698433   25689 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:55:32.698448   25689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:55:32.700491   25689 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0708 19:55:32.701862   25689 addons.go:510] duration metric: took 985.416008ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0708 19:55:32.701904   25689 start.go:245] waiting for cluster config update ...
	I0708 19:55:32.701920   25689 start.go:254] writing updated cluster config ...
	I0708 19:55:32.703577   25689 out.go:177] 
	I0708 19:55:32.705002   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:55:32.705068   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:55:32.706898   25689 out.go:177] * Starting "ha-511021-m02" control-plane node in "ha-511021" cluster
	I0708 19:55:32.708244   25689 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 19:55:32.708274   25689 cache.go:56] Caching tarball of preloaded images
	I0708 19:55:32.708364   25689 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 19:55:32.708375   25689 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 19:55:32.708451   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:55:32.708627   25689 start.go:360] acquireMachinesLock for ha-511021-m02: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 19:55:32.708670   25689 start.go:364] duration metric: took 22.327µs to acquireMachinesLock for "ha-511021-m02"
	I0708 19:55:32.708687   25689 start.go:93] Provisioning new machine with config: &{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:55:32.708746   25689 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0708 19:55:32.710542   25689 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 19:55:32.710630   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:32.710656   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:32.725807   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I0708 19:55:32.726396   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:32.726875   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:32.726893   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:32.727241   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:32.727494   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetMachineName
	I0708 19:55:32.727674   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:32.727842   25689 start.go:159] libmachine.API.Create for "ha-511021" (driver="kvm2")
	I0708 19:55:32.727867   25689 client.go:168] LocalClient.Create starting
	I0708 19:55:32.727903   25689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem
	I0708 19:55:32.727944   25689 main.go:141] libmachine: Decoding PEM data...
	I0708 19:55:32.727967   25689 main.go:141] libmachine: Parsing certificate...
	I0708 19:55:32.728033   25689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem
	I0708 19:55:32.728060   25689 main.go:141] libmachine: Decoding PEM data...
	I0708 19:55:32.728076   25689 main.go:141] libmachine: Parsing certificate...
	I0708 19:55:32.728103   25689 main.go:141] libmachine: Running pre-create checks...
	I0708 19:55:32.728114   25689 main.go:141] libmachine: (ha-511021-m02) Calling .PreCreateCheck
	I0708 19:55:32.728349   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetConfigRaw
	I0708 19:55:32.728846   25689 main.go:141] libmachine: Creating machine...
	I0708 19:55:32.728874   25689 main.go:141] libmachine: (ha-511021-m02) Calling .Create
	I0708 19:55:32.729003   25689 main.go:141] libmachine: (ha-511021-m02) Creating KVM machine...
	I0708 19:55:32.730206   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found existing default KVM network
	I0708 19:55:32.730327   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found existing private KVM network mk-ha-511021
	I0708 19:55:32.730448   25689 main.go:141] libmachine: (ha-511021-m02) Setting up store path in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02 ...
	I0708 19:55:32.730467   25689 main.go:141] libmachine: (ha-511021-m02) Building disk image from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso
	I0708 19:55:32.730518   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:32.730429   26079 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:55:32.730617   25689 main.go:141] libmachine: (ha-511021-m02) Downloading /home/jenkins/minikube-integration/19195-5988/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso...
	I0708 19:55:32.948539   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:32.948390   26079 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa...
	I0708 19:55:33.237905   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:33.237782   26079 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/ha-511021-m02.rawdisk...
	I0708 19:55:33.237935   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Writing magic tar header
	I0708 19:55:33.237946   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Writing SSH key tar header
	I0708 19:55:33.237958   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:33.237893   26079 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02 ...
	I0708 19:55:33.237974   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02
	I0708 19:55:33.238025   25689 main.go:141] libmachine: (ha-511021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02 (perms=drwx------)
	I0708 19:55:33.238051   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines
	I0708 19:55:33.238064   25689 main.go:141] libmachine: (ha-511021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines (perms=drwxr-xr-x)
	I0708 19:55:33.238084   25689 main.go:141] libmachine: (ha-511021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube (perms=drwxr-xr-x)
	I0708 19:55:33.238118   25689 main.go:141] libmachine: (ha-511021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988 (perms=drwxrwxr-x)
	I0708 19:55:33.238167   25689 main.go:141] libmachine: (ha-511021-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0708 19:55:33.238187   25689 main.go:141] libmachine: (ha-511021-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0708 19:55:33.238195   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:55:33.238216   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988
	I0708 19:55:33.238235   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0708 19:55:33.238247   25689 main.go:141] libmachine: (ha-511021-m02) Creating domain...
	I0708 19:55:33.238264   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home/jenkins
	I0708 19:55:33.238277   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home
	I0708 19:55:33.238291   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Skipping /home - not owner
	I0708 19:55:33.239070   25689 main.go:141] libmachine: (ha-511021-m02) define libvirt domain using xml: 
	I0708 19:55:33.239090   25689 main.go:141] libmachine: (ha-511021-m02) <domain type='kvm'>
	I0708 19:55:33.239097   25689 main.go:141] libmachine: (ha-511021-m02)   <name>ha-511021-m02</name>
	I0708 19:55:33.239103   25689 main.go:141] libmachine: (ha-511021-m02)   <memory unit='MiB'>2200</memory>
	I0708 19:55:33.239115   25689 main.go:141] libmachine: (ha-511021-m02)   <vcpu>2</vcpu>
	I0708 19:55:33.239125   25689 main.go:141] libmachine: (ha-511021-m02)   <features>
	I0708 19:55:33.239152   25689 main.go:141] libmachine: (ha-511021-m02)     <acpi/>
	I0708 19:55:33.239174   25689 main.go:141] libmachine: (ha-511021-m02)     <apic/>
	I0708 19:55:33.239198   25689 main.go:141] libmachine: (ha-511021-m02)     <pae/>
	I0708 19:55:33.239209   25689 main.go:141] libmachine: (ha-511021-m02)     
	I0708 19:55:33.239220   25689 main.go:141] libmachine: (ha-511021-m02)   </features>
	I0708 19:55:33.239231   25689 main.go:141] libmachine: (ha-511021-m02)   <cpu mode='host-passthrough'>
	I0708 19:55:33.239241   25689 main.go:141] libmachine: (ha-511021-m02)   
	I0708 19:55:33.239251   25689 main.go:141] libmachine: (ha-511021-m02)   </cpu>
	I0708 19:55:33.239261   25689 main.go:141] libmachine: (ha-511021-m02)   <os>
	I0708 19:55:33.239272   25689 main.go:141] libmachine: (ha-511021-m02)     <type>hvm</type>
	I0708 19:55:33.239285   25689 main.go:141] libmachine: (ha-511021-m02)     <boot dev='cdrom'/>
	I0708 19:55:33.239296   25689 main.go:141] libmachine: (ha-511021-m02)     <boot dev='hd'/>
	I0708 19:55:33.239321   25689 main.go:141] libmachine: (ha-511021-m02)     <bootmenu enable='no'/>
	I0708 19:55:33.239343   25689 main.go:141] libmachine: (ha-511021-m02)   </os>
	I0708 19:55:33.239365   25689 main.go:141] libmachine: (ha-511021-m02)   <devices>
	I0708 19:55:33.239384   25689 main.go:141] libmachine: (ha-511021-m02)     <disk type='file' device='cdrom'>
	I0708 19:55:33.239395   25689 main.go:141] libmachine: (ha-511021-m02)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/boot2docker.iso'/>
	I0708 19:55:33.239401   25689 main.go:141] libmachine: (ha-511021-m02)       <target dev='hdc' bus='scsi'/>
	I0708 19:55:33.239407   25689 main.go:141] libmachine: (ha-511021-m02)       <readonly/>
	I0708 19:55:33.239422   25689 main.go:141] libmachine: (ha-511021-m02)     </disk>
	I0708 19:55:33.239430   25689 main.go:141] libmachine: (ha-511021-m02)     <disk type='file' device='disk'>
	I0708 19:55:33.239436   25689 main.go:141] libmachine: (ha-511021-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0708 19:55:33.239459   25689 main.go:141] libmachine: (ha-511021-m02)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/ha-511021-m02.rawdisk'/>
	I0708 19:55:33.239476   25689 main.go:141] libmachine: (ha-511021-m02)       <target dev='hda' bus='virtio'/>
	I0708 19:55:33.239487   25689 main.go:141] libmachine: (ha-511021-m02)     </disk>
	I0708 19:55:33.239497   25689 main.go:141] libmachine: (ha-511021-m02)     <interface type='network'>
	I0708 19:55:33.239509   25689 main.go:141] libmachine: (ha-511021-m02)       <source network='mk-ha-511021'/>
	I0708 19:55:33.239517   25689 main.go:141] libmachine: (ha-511021-m02)       <model type='virtio'/>
	I0708 19:55:33.239523   25689 main.go:141] libmachine: (ha-511021-m02)     </interface>
	I0708 19:55:33.239531   25689 main.go:141] libmachine: (ha-511021-m02)     <interface type='network'>
	I0708 19:55:33.239539   25689 main.go:141] libmachine: (ha-511021-m02)       <source network='default'/>
	I0708 19:55:33.239544   25689 main.go:141] libmachine: (ha-511021-m02)       <model type='virtio'/>
	I0708 19:55:33.239551   25689 main.go:141] libmachine: (ha-511021-m02)     </interface>
	I0708 19:55:33.239556   25689 main.go:141] libmachine: (ha-511021-m02)     <serial type='pty'>
	I0708 19:55:33.239563   25689 main.go:141] libmachine: (ha-511021-m02)       <target port='0'/>
	I0708 19:55:33.239567   25689 main.go:141] libmachine: (ha-511021-m02)     </serial>
	I0708 19:55:33.239580   25689 main.go:141] libmachine: (ha-511021-m02)     <console type='pty'>
	I0708 19:55:33.239586   25689 main.go:141] libmachine: (ha-511021-m02)       <target type='serial' port='0'/>
	I0708 19:55:33.239611   25689 main.go:141] libmachine: (ha-511021-m02)     </console>
	I0708 19:55:33.239632   25689 main.go:141] libmachine: (ha-511021-m02)     <rng model='virtio'>
	I0708 19:55:33.239646   25689 main.go:141] libmachine: (ha-511021-m02)       <backend model='random'>/dev/random</backend>
	I0708 19:55:33.239656   25689 main.go:141] libmachine: (ha-511021-m02)     </rng>
	I0708 19:55:33.239666   25689 main.go:141] libmachine: (ha-511021-m02)     
	I0708 19:55:33.239679   25689 main.go:141] libmachine: (ha-511021-m02)     
	I0708 19:55:33.239693   25689 main.go:141] libmachine: (ha-511021-m02)   </devices>
	I0708 19:55:33.239706   25689 main.go:141] libmachine: (ha-511021-m02) </domain>
	I0708 19:55:33.239719   25689 main.go:141] libmachine: (ha-511021-m02) 
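Note: the domain XML above is what libmachine hands to libvirt for the new VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a CD-ROM boot device, the raw disk, and two virtio NICs (the private mk-ha-511021 network plus the default network). Defining and starting an equivalent domain by hand could look roughly like this sketch, which shells out to virsh rather than using the libvirt API directly (the XML file path is an assumption for illustration):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Assumes the domain XML shown above has been written to /tmp/ha-511021-m02.xml.
	for _, args := range [][]string{
		{"define", "/tmp/ha-511021-m02.xml"}, // register the domain with libvirt
		{"start", "ha-511021-m02"},           // boot it
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
	}
}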
	I0708 19:55:33.245823   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:bf:22:4c in network default
	I0708 19:55:33.246371   25689 main.go:141] libmachine: (ha-511021-m02) Ensuring networks are active...
	I0708 19:55:33.246404   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:33.247042   25689 main.go:141] libmachine: (ha-511021-m02) Ensuring network default is active
	I0708 19:55:33.247361   25689 main.go:141] libmachine: (ha-511021-m02) Ensuring network mk-ha-511021 is active
	I0708 19:55:33.247774   25689 main.go:141] libmachine: (ha-511021-m02) Getting domain xml...
	I0708 19:55:33.248434   25689 main.go:141] libmachine: (ha-511021-m02) Creating domain...
	I0708 19:55:34.477510   25689 main.go:141] libmachine: (ha-511021-m02) Waiting to get IP...
	I0708 19:55:34.478237   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:34.478640   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:34.478669   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:34.478627   26079 retry.go:31] will retry after 281.543718ms: waiting for machine to come up
	I0708 19:55:34.762270   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:34.762710   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:34.762738   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:34.762654   26079 retry.go:31] will retry after 382.724475ms: waiting for machine to come up
	I0708 19:55:35.147285   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:35.147774   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:35.147804   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:35.147726   26079 retry.go:31] will retry after 448.924672ms: waiting for machine to come up
	I0708 19:55:35.598552   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:35.598959   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:35.598987   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:35.598907   26079 retry.go:31] will retry after 526.749552ms: waiting for machine to come up
	I0708 19:55:36.127207   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:36.127692   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:36.127720   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:36.127664   26079 retry.go:31] will retry after 750.455986ms: waiting for machine to come up
	I0708 19:55:36.879870   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:36.880300   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:36.880341   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:36.880201   26079 retry.go:31] will retry after 665.309052ms: waiting for machine to come up
	I0708 19:55:37.547443   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:37.547843   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:37.547864   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:37.547830   26079 retry.go:31] will retry after 1.158507742s: waiting for machine to come up
	I0708 19:55:38.707853   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:38.708312   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:38.708337   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:38.708275   26079 retry.go:31] will retry after 1.226996776s: waiting for machine to come up
	I0708 19:55:39.937245   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:39.937745   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:39.937766   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:39.937687   26079 retry.go:31] will retry after 1.502146373s: waiting for machine to come up
	I0708 19:55:41.442564   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:41.443048   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:41.443077   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:41.442996   26079 retry.go:31] will retry after 2.11023787s: waiting for machine to come up
	I0708 19:55:43.555301   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:43.555850   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:43.555876   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:43.555807   26079 retry.go:31] will retry after 2.54569276s: waiting for machine to come up
	I0708 19:55:46.102861   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:46.103212   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:46.103238   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:46.103171   26079 retry.go:31] will retry after 3.061209639s: waiting for machine to come up
	I0708 19:55:49.166252   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:49.166583   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:49.166614   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:49.166576   26079 retry.go:31] will retry after 3.099576885s: waiting for machine to come up
	I0708 19:55:52.268760   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:52.269272   25689 main.go:141] libmachine: (ha-511021-m02) Found IP for machine: 192.168.39.216
	I0708 19:55:52.269291   25689 main.go:141] libmachine: (ha-511021-m02) Reserving static IP address...
	I0708 19:55:52.269306   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has current primary IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:52.269564   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find host DHCP lease matching {name: "ha-511021-m02", mac: "52:54:00:e2:dd:87", ip: "192.168.39.216"} in network mk-ha-511021
	I0708 19:55:52.342989   25689 main.go:141] libmachine: (ha-511021-m02) Reserved static IP address: 192.168.39.216
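Note: the "will retry after ..." lines above show the wait for the DHCP lease: each attempt looks up the domain's IP and, if none is assigned yet, sleeps for a growing interval before trying again. A generic sketch of that retry-with-backoff pattern (the doubling schedule and the stand-in condition are assumptions, not minikube's exact backoff):

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// retry calls fn until it succeeds or the attempts run out, doubling the delay each time.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		log.Printf("will retry after %v: %v", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	tries := 0
	err := retry(10, 300*time.Millisecond, func() error {
		tries++
		if tries < 4 { // stand-in for "no DHCP lease for the domain yet"
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("machine is up after", tries, "attempts")
}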
	I0708 19:55:52.343020   25689 main.go:141] libmachine: (ha-511021-m02) Waiting for SSH to be available...
	I0708 19:55:52.343030   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Getting to WaitForSSH function...
	I0708 19:55:52.345518   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:52.345938   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021
	I0708 19:55:52.345965   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find defined IP address of network mk-ha-511021 interface with MAC address 52:54:00:e2:dd:87
	I0708 19:55:52.346099   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Using SSH client type: external
	I0708 19:55:52.346121   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa (-rw-------)
	I0708 19:55:52.346154   25689 main.go:141] libmachine: (ha-511021-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 19:55:52.346176   25689 main.go:141] libmachine: (ha-511021-m02) DBG | About to run SSH command:
	I0708 19:55:52.346194   25689 main.go:141] libmachine: (ha-511021-m02) DBG | exit 0
	I0708 19:55:52.350040   25689 main.go:141] libmachine: (ha-511021-m02) DBG | SSH cmd err, output: exit status 255: 
	I0708 19:55:52.350066   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0708 19:55:52.350074   25689 main.go:141] libmachine: (ha-511021-m02) DBG | command : exit 0
	I0708 19:55:52.350079   25689 main.go:141] libmachine: (ha-511021-m02) DBG | err     : exit status 255
	I0708 19:55:52.350086   25689 main.go:141] libmachine: (ha-511021-m02) DBG | output  : 
	I0708 19:55:55.351658   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Getting to WaitForSSH function...
	I0708 19:55:55.354551   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.355109   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.355138   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.355291   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Using SSH client type: external
	I0708 19:55:55.355315   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa (-rw-------)
	I0708 19:55:55.355345   25689 main.go:141] libmachine: (ha-511021-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 19:55:55.355399   25689 main.go:141] libmachine: (ha-511021-m02) DBG | About to run SSH command:
	I0708 19:55:55.355444   25689 main.go:141] libmachine: (ha-511021-m02) DBG | exit 0
	I0708 19:55:55.483834   25689 main.go:141] libmachine: (ha-511021-m02) DBG | SSH cmd err, output: <nil>: 
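The retry above is the WaitForSSH pattern: an `exit 0` probe over SSH that fails (status 255) until the guest's sshd comes up, then succeeds. A minimal Go sketch of that pattern — not minikube's actual code — using the user, address and key path from the log as placeholders:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running `exit 0` over SSH until the guest accepts the
// connection or the attempt budget is exhausted.
func waitForSSH(addr, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd is up
		}
		time.Sleep(3 * time.Second) // roughly the gap between retries in the log
	}
	return fmt.Errorf("ssh to %s not ready after %d attempts", addr, attempts)
}

func main() {
	if err := waitForSSH("192.168.39.216", "/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa", 20); err != nil {
		fmt.Println(err)
	}
}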
	I0708 19:55:55.484110   25689 main.go:141] libmachine: (ha-511021-m02) KVM machine creation complete!
	I0708 19:55:55.484422   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetConfigRaw
	I0708 19:55:55.484928   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:55.485123   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:55.485307   25689 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0708 19:55:55.485321   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetState
	I0708 19:55:55.486524   25689 main.go:141] libmachine: Detecting operating system of created instance...
	I0708 19:55:55.486535   25689 main.go:141] libmachine: Waiting for SSH to be available...
	I0708 19:55:55.486550   25689 main.go:141] libmachine: Getting to WaitForSSH function...
	I0708 19:55:55.486555   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:55.488949   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.489308   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.489328   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.489479   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:55.489703   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.489856   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.490033   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:55.490204   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:55.490437   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0708 19:55:55.490453   25689 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0708 19:55:55.602951   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 19:55:55.603071   25689 main.go:141] libmachine: Detecting the provisioner...
	I0708 19:55:55.603084   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:55.606101   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.606461   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.606490   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.606683   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:55.606878   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.607053   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.607176   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:55.607333   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:55.607533   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0708 19:55:55.607544   25689 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0708 19:55:55.716596   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0708 19:55:55.716657   25689 main.go:141] libmachine: found compatible host: buildroot
	I0708 19:55:55.716663   25689 main.go:141] libmachine: Provisioning with buildroot...
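Provisioner detection boils down to reading /etc/os-release on the guest and matching the ID field (here it resolves to "buildroot"). A rough sketch of that parsing step, not libmachine's implementation:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads KEY=VALUE pairs from an os-release file,
// stripping surrounding quotes from the values.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	kv := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			kv[k] = strings.Trim(v, `"`)
		}
	}
	return kv, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ID:", info["ID"], "VERSION_ID:", info["VERSION_ID"]) // e.g. buildroot 2023.02.9
}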
	I0708 19:55:55.716670   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetMachineName
	I0708 19:55:55.716915   25689 buildroot.go:166] provisioning hostname "ha-511021-m02"
	I0708 19:55:55.716939   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetMachineName
	I0708 19:55:55.717138   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:55.720201   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.720658   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.720686   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.720844   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:55.721029   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.721216   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.721362   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:55.721511   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:55.721666   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0708 19:55:55.721679   25689 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-511021-m02 && echo "ha-511021-m02" | sudo tee /etc/hostname
	I0708 19:55:55.844716   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-511021-m02
	
	I0708 19:55:55.844746   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:55.847576   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.847887   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.847914   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.848059   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:55.848261   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.848455   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.848604   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:55.848797   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:55.848990   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0708 19:55:55.849007   25689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-511021-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-511021-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-511021-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 19:55:55.969354   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 19:55:55.969382   25689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 19:55:55.969402   25689 buildroot.go:174] setting up certificates
	I0708 19:55:55.969413   25689 provision.go:84] configureAuth start
	I0708 19:55:55.969425   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetMachineName
	I0708 19:55:55.969705   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 19:55:55.972586   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.972945   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.972971   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.973133   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:55.975163   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.975556   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.975583   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.975725   25689 provision.go:143] copyHostCerts
	I0708 19:55:55.975757   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 19:55:55.975790   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 19:55:55.975799   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 19:55:55.975875   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 19:55:55.975962   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 19:55:55.975991   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 19:55:55.976002   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 19:55:55.976046   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 19:55:55.976121   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 19:55:55.976140   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 19:55:55.976148   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 19:55:55.976179   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 19:55:55.976237   25689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.ha-511021-m02 san=[127.0.0.1 192.168.39.216 ha-511021-m02 localhost minikube]
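The server cert generated here is a CA-signed certificate whose SANs are exactly the list in the line above: loopback, the node IP, the hostname, localhost, and minikube. A hedged sketch of issuing such a cert with Go's crypto/x509, assuming an RSA CA key in PKCS#1 PEM form; the file names are illustrative, not the paths minikube uses:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("could not decode CA PEM material")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	if err != nil {
		log.Fatal(err)
	}

	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-511021-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above:
		DNSNames:    []string{"ha-511021-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.216")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	certOut := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyOut := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
	if err := os.WriteFile("server.pem", certOut, 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("server-key.pem", keyOut, 0o600); err != nil {
		log.Fatal(err)
	}
}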
	I0708 19:55:56.146290   25689 provision.go:177] copyRemoteCerts
	I0708 19:55:56.146342   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 19:55:56.146364   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:56.148906   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.149248   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.149275   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.149468   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:56.149676   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.149828   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:56.149959   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	I0708 19:55:56.234581   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 19:55:56.234654   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 19:55:56.261328   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 19:55:56.261397   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0708 19:55:56.286817   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 19:55:56.286879   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 19:55:56.312274   25689 provision.go:87] duration metric: took 342.848931ms to configureAuth
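configureAuth finishes by copying the CA and the freshly issued server cert/key to /etc/docker on the guest. One simple way to express that transfer in Go — a sketch of the idea, not necessarily how minikube's ssh_runner implements scp — is to pipe the local file into `sudo tee` on the remote side; address and paths are taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// copyToGuest streams a local file into `sudo tee` on the remote host,
// landing it at a root-owned path.
func copyToGuest(addr, keyPath, localPath, remotePath string) error {
	src, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer src.Close()
	cmd := exec.Command("ssh", "-i", keyPath, "docker@"+addr,
		fmt.Sprintf("sudo mkdir -p $(dirname %s) && sudo tee %s > /dev/null", remotePath, remotePath))
	cmd.Stdin = src
	return cmd.Run()
}

func main() {
	err := copyToGuest("192.168.39.216",
		"/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa",
		"/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem",
		"/etc/docker/ca.pem")
	if err != nil {
		fmt.Println(err)
	}
}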
	I0708 19:55:56.312317   25689 buildroot.go:189] setting minikube options for container-runtime
	I0708 19:55:56.312508   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:55:56.312590   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:56.315095   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.315418   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.315466   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.315698   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:56.315888   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.316056   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.316202   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:56.316345   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:56.316512   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0708 19:55:56.316533   25689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 19:55:56.591734   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 19:55:56.591765   25689 main.go:141] libmachine: Checking connection to Docker...
	I0708 19:55:56.591775   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetURL
	I0708 19:55:56.592816   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Using libvirt version 6000000
	I0708 19:55:56.595154   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.595493   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.595520   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.595692   25689 main.go:141] libmachine: Docker is up and running!
	I0708 19:55:56.595705   25689 main.go:141] libmachine: Reticulating splines...
	I0708 19:55:56.595712   25689 client.go:171] duration metric: took 23.867837165s to LocalClient.Create
	I0708 19:55:56.595731   25689 start.go:167] duration metric: took 23.867892319s to libmachine.API.Create "ha-511021"
	I0708 19:55:56.595739   25689 start.go:293] postStartSetup for "ha-511021-m02" (driver="kvm2")
	I0708 19:55:56.595748   25689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 19:55:56.595763   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:56.595978   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 19:55:56.595999   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:56.598010   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.598339   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.598354   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.598468   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:56.598632   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.598764   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:56.598920   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	I0708 19:55:56.686415   25689 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 19:55:56.690968   25689 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 19:55:56.690991   25689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 19:55:56.691053   25689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 19:55:56.691119   25689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 19:55:56.691128   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /etc/ssl/certs/131412.pem
	I0708 19:55:56.691207   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 19:55:56.701758   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 19:55:56.727119   25689 start.go:296] duration metric: took 131.369772ms for postStartSetup
	I0708 19:55:56.727159   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetConfigRaw
	I0708 19:55:56.727721   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 19:55:56.730082   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.730451   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.730476   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.730688   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:55:56.730871   25689 start.go:128] duration metric: took 24.022115297s to createHost
	I0708 19:55:56.730894   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:56.733156   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.733472   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.733496   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.733647   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:56.733812   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.733975   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.734096   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:56.734248   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:56.734452   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0708 19:55:56.734468   25689 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 19:55:56.844723   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720468556.821051446
	
	I0708 19:55:56.844747   25689 fix.go:216] guest clock: 1720468556.821051446
	I0708 19:55:56.844757   25689 fix.go:229] Guest: 2024-07-08 19:55:56.821051446 +0000 UTC Remote: 2024-07-08 19:55:56.730882592 +0000 UTC m=+77.116577746 (delta=90.168854ms)
	I0708 19:55:56.844777   25689 fix.go:200] guest clock delta is within tolerance: 90.168854ms
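The clock check above runs `date +%s.%N` on the guest and compares the result against the host clock, accepting the ~90ms skew as within tolerance. A small sketch of the same measurement (a standalone illustration, not fix.go; it assumes date's %N is zero-padded to nine digits):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta reads the guest's clock over SSH as seconds.nanoseconds
// and returns host-now minus guest-time.
func guestClockDelta(addr, keyPath string) (time.Duration, error) {
	out, err := exec.Command("ssh", "-i", keyPath, "docker@"+addr, "date +%s.%N").Output()
	if err != nil {
		return 0, err
	}
	parts := strings.SplitN(strings.TrimSpace(string(out)), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if n, err := strconv.ParseInt(parts[1], 10, 64); err == nil {
			nsec = n
		}
	}
	return time.Since(time.Unix(sec, nsec)), nil
}

func main() {
	delta, err := guestClockDelta("192.168.39.216", "/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("guest clock delta: %v\n", delta)
}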
	I0708 19:55:56.844784   25689 start.go:83] releasing machines lock for "ha-511021-m02", held for 24.136104006s
	I0708 19:55:56.844807   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:56.845081   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 19:55:56.847788   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.848120   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.848140   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.850423   25689 out.go:177] * Found network options:
	I0708 19:55:56.851805   25689 out.go:177]   - NO_PROXY=192.168.39.33
	W0708 19:55:56.853006   25689 proxy.go:119] fail to check proxy env: Error ip not in block
	I0708 19:55:56.853031   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:56.853591   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:56.853768   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:56.853858   25689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 19:55:56.853897   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	W0708 19:55:56.853961   25689 proxy.go:119] fail to check proxy env: Error ip not in block
	I0708 19:55:56.854032   25689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 19:55:56.854054   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:56.856550   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.856730   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.856903   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.856930   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.857098   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:56.857104   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.857126   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.857311   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:56.857331   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.857504   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:56.857511   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.857661   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:56.857660   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	I0708 19:55:56.857810   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	I0708 19:55:57.093354   25689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 19:55:57.100070   25689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 19:55:57.100166   25689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 19:55:57.116867   25689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 19:55:57.116898   25689 start.go:494] detecting cgroup driver to use...
	I0708 19:55:57.116969   25689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 19:55:57.135272   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 19:55:57.152746   25689 docker.go:217] disabling cri-docker service (if available) ...
	I0708 19:55:57.152806   25689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 19:55:57.169544   25689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 19:55:57.184676   25689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 19:55:57.306676   25689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 19:55:57.455741   25689 docker.go:233] disabling docker service ...
	I0708 19:55:57.455814   25689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 19:55:57.471241   25689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 19:55:57.484940   25689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 19:55:57.625933   25689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 19:55:57.749504   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 19:55:57.763929   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 19:55:57.783042   25689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 19:55:57.783100   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:57.793433   25689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 19:55:57.793498   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:57.803935   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:57.814024   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:57.824385   25689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 19:55:57.835327   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:57.846638   25689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:57.864310   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
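The sed pipeline above rewrites CRI-O's drop-in config: pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, force conmon into the "pod" cgroup, and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A loose Go equivalent of the first two edits — illustrative only; the real flow keeps using sed on the guest:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey rewrites every line mentioning `key = ...` to `key = "value"`,
// mirroring the sed expressions in the log.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		fmt.Println(err)
	}
}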
	I0708 19:55:57.875470   25689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 19:55:57.885159   25689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 19:55:57.885230   25689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 19:55:57.899496   25689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
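Because /proc/sys/net/bridge/bridge-nf-call-iptables is missing, the flow falls back to loading br_netfilter and then enables IPv4 forwarding. A small sketch of the same two steps (it must run as root on the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is absent, load the br_netfilter module.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter: %v: %s\n", err, out)
		}
	}
	// Enable IPv4 forwarding, as `echo 1 > /proc/sys/net/ipv4/ip_forward` does.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}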
	I0708 19:55:57.909743   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:55:58.039190   25689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 19:55:58.180523   25689 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 19:55:58.180599   25689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 19:55:58.185712   25689 start.go:562] Will wait 60s for crictl version
	I0708 19:55:58.185775   25689 ssh_runner.go:195] Run: which crictl
	I0708 19:55:58.189767   25689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 19:55:58.230255   25689 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 19:55:58.230350   25689 ssh_runner.go:195] Run: crio --version
	I0708 19:55:58.259882   25689 ssh_runner.go:195] Run: crio --version
	I0708 19:55:58.291237   25689 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 19:55:58.293111   25689 out.go:177]   - env NO_PROXY=192.168.39.33
	I0708 19:55:58.294387   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 19:55:58.297301   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:58.297612   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:58.297640   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:58.297811   25689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 19:55:58.301994   25689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
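That one-liner rewrites /etc/hosts so host.minikube.internal always resolves to the gateway IP: strip any old entry, append the current one, then copy the file back into place. A loose Go sketch of the same idempotent update (it matches on the last whitespace-separated field rather than the literal tab the grep uses):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line whose last field is `name`
// and appends a fresh "ip<TAB>name" entry.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		f := strings.Fields(line)
		if len(f) >= 2 && f[len(f)-1] == name {
			continue // stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}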
	I0708 19:55:58.314364   25689 mustload.go:65] Loading cluster: ha-511021
	I0708 19:55:58.314543   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:55:58.314774   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:58.314799   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:58.329140   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39711
	I0708 19:55:58.329526   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:58.329966   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:58.329982   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:58.330513   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:58.330705   25689 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 19:55:58.332263   25689 host.go:66] Checking if "ha-511021" exists ...
	I0708 19:55:58.332541   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:58.332570   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:58.348547   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39075
	I0708 19:55:58.348930   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:58.349354   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:58.349373   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:58.349658   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:58.349842   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:58.349980   25689 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021 for IP: 192.168.39.216
	I0708 19:55:58.349992   25689 certs.go:194] generating shared ca certs ...
	I0708 19:55:58.350010   25689 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:58.350149   25689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 19:55:58.350205   25689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 19:55:58.350219   25689 certs.go:256] generating profile certs ...
	I0708 19:55:58.350404   25689 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key
	I0708 19:55:58.350442   25689 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.9d499452
	I0708 19:55:58.350462   25689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.9d499452 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.33 192.168.39.216 192.168.39.254]
	I0708 19:55:58.488883   25689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.9d499452 ...
	I0708 19:55:58.488912   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.9d499452: {Name:mke2c1acf56b5fe06b7700caff32ef7d088bced9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:58.489077   25689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.9d499452 ...
	I0708 19:55:58.489092   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.9d499452: {Name:mk25c9e786a144c25fe333b8e79bf36398614c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:58.489158   25689 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.9d499452 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt
	I0708 19:55:58.489281   25689 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.9d499452 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key
	I0708 19:55:58.489398   25689 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key
	I0708 19:55:58.489412   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 19:55:58.489424   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 19:55:58.489434   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 19:55:58.489444   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 19:55:58.489456   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 19:55:58.489466   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 19:55:58.489477   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 19:55:58.489486   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 19:55:58.489557   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 19:55:58.489589   25689 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 19:55:58.489598   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 19:55:58.489618   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 19:55:58.489639   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 19:55:58.489661   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 19:55:58.489702   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 19:55:58.489729   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem -> /usr/share/ca-certificates/13141.pem
	I0708 19:55:58.489742   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /usr/share/ca-certificates/131412.pem
	I0708 19:55:58.489754   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:58.489782   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:58.492795   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:58.493194   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:58.493224   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:58.493404   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:58.493597   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:58.493727   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:58.493879   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:58.575804   25689 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0708 19:55:58.581606   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0708 19:55:58.593359   25689 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0708 19:55:58.597934   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0708 19:55:58.608339   25689 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0708 19:55:58.612329   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0708 19:55:58.622599   25689 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0708 19:55:58.627590   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0708 19:55:58.639969   25689 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0708 19:55:58.645042   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0708 19:55:58.658608   25689 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0708 19:55:58.663411   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0708 19:55:58.675636   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 19:55:58.703616   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 19:55:58.731091   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 19:55:58.758232   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 19:55:58.786887   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0708 19:55:58.813878   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 19:55:58.841425   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 19:55:58.866914   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 19:55:58.892897   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 19:55:58.919388   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 19:55:58.946031   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 19:55:58.972792   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0708 19:55:58.992597   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0708 19:55:59.012052   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0708 19:55:59.030024   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0708 19:55:59.047587   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0708 19:55:59.065891   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0708 19:55:59.084561   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0708 19:55:59.102698   25689 ssh_runner.go:195] Run: openssl version
	I0708 19:55:59.108854   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 19:55:59.120506   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 19:55:59.125400   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 19:55:59.125468   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 19:55:59.132125   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 19:55:59.143357   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 19:55:59.154995   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:59.159827   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:59.159893   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:59.166112   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 19:55:59.177755   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 19:55:59.188869   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 19:55:59.193502   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 19:55:59.193560   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 19:55:59.199432   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
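Each installed CA above gets a companion symlink named <subject-hash>.0 in /etc/ssl/certs, which is how OpenSSL's hashed certificate-directory lookup finds it. A minimal sketch of that wiring, shelling out to openssl for the hash exactly as the commands above do (paths are the ones from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert asks openssl for the certificate's subject hash and creates the
// <hash>.0 symlink in certsDir, the layout OpenSSL's -CApath lookup expects.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any existing link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}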
	I0708 19:55:59.210498   25689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 19:55:59.214711   25689 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 19:55:59.214764   25689 kubeadm.go:928] updating node {m02 192.168.39.216 8443 v1.30.2 crio true true} ...
	I0708 19:55:59.214833   25689 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-511021-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 19:55:59.214857   25689 kube-vip.go:115] generating kube-vip config ...
	I0708 19:55:59.214891   25689 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0708 19:55:59.233565   25689 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0708 19:55:59.233649   25689 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0708 19:55:59.233712   25689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 19:55:59.244383   25689 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0708 19:55:59.244443   25689 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0708 19:55:59.256531   25689 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0708 19:55:59.256559   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0708 19:55:59.256616   25689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0708 19:55:59.256653   25689 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0708 19:55:59.256682   25689 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0708 19:55:59.261172   25689 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0708 19:55:59.261195   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0708 19:55:59.796847   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0708 19:55:59.796925   25689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0708 19:55:59.802130   25689 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0708 19:55:59.802174   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0708 19:56:00.118058   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 19:56:00.133338   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0708 19:56:00.133447   25689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0708 19:56:00.137922   25689 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0708 19:56:00.137960   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
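
The three transfers above follow one pattern: stat /var/lib/minikube/binaries/v1.30.2/<binary> on the node, download from dl.k8s.io with a checksum reference when it is missing, then scp the cached copy over. Below is a minimal stand-alone sketch of that download-and-verify step, assuming the release URL layout shown in the log and that the published .sha256 file holds the hex digest; it is not minikube's download.go implementation.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url and returns the body bytes, failing on non-200 responses.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// Same release layout as the URLs in the log; version and arch are examples.
	base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	got := sha256.Sum256(bin)
	want := strings.TrimSpace(string(sum)) // the .sha256 file contains the hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch: refusing to install kubectl")
	}

	// Write the verified binary locally; minikube caches it under
	// .minikube/cache and then scp's it onto the node.
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl downloaded and checksum verified")
}
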
	I0708 19:56:00.572541   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0708 19:56:00.582919   25689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0708 19:56:00.601072   25689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 19:56:00.619999   25689 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0708 19:56:00.638081   25689 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0708 19:56:00.642218   25689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:56:00.657388   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:56:00.780308   25689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:56:00.798165   25689 host.go:66] Checking if "ha-511021" exists ...
	I0708 19:56:00.798622   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:56:00.798672   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:56:00.813316   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I0708 19:56:00.813753   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:56:00.814217   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:56:00.814235   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:56:00.814594   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:56:00.814804   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:56:00.814972   25689 start.go:316] joinCluster: &{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:56:00.815056   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0708 19:56:00.815077   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:56:00.817849   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:56:00.818257   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:56:00.818286   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:56:00.818501   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:56:00.818674   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:56:00.818849   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:56:00.819044   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:56:00.984474   25689 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:56:00.984518   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7yvtjh.6r8fpit8xu0pxizs --discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-511021-m02 --control-plane --apiserver-advertise-address=192.168.39.216 --apiserver-bind-port=8443"
	I0708 19:56:24.646578   25689 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7yvtjh.6r8fpit8xu0pxizs --discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-511021-m02 --control-plane --apiserver-advertise-address=192.168.39.216 --apiserver-bind-port=8443": (23.662035066s)
	I0708 19:56:24.646617   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0708 19:56:25.243165   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-511021-m02 minikube.k8s.io/updated_at=2024_07_08T19_56_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=ha-511021 minikube.k8s.io/primary=false
	I0708 19:56:25.378158   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-511021-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0708 19:56:25.496399   25689 start.go:318] duration metric: took 24.6814294s to joinCluster
	I0708 19:56:25.496469   25689 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:56:25.496727   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:56:25.498340   25689 out.go:177] * Verifying Kubernetes components...
	I0708 19:56:25.499715   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:56:25.826747   25689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:56:25.905596   25689 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:56:25.905928   25689 kapi.go:59] client config for ha-511021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key", CAFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfdf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0708 19:56:25.906011   25689 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.33:8443
	I0708 19:56:25.906292   25689 node_ready.go:35] waiting up to 6m0s for node "ha-511021-m02" to be "Ready" ...
	I0708 19:56:25.906381   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:25.906391   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:25.906402   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:25.906410   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:25.920724   25689 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0708 19:56:26.407011   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:26.407037   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:26.407048   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:26.407055   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:26.437160   25689 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0708 19:56:26.907246   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:26.907268   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:26.907278   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:26.907283   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:26.912369   25689 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0708 19:56:27.407268   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:27.407289   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:27.407300   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:27.407308   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:27.410994   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:27.907201   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:27.907221   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:27.907229   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:27.907233   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:27.911148   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:27.911916   25689 node_ready.go:53] node "ha-511021-m02" has status "Ready":"False"
	I0708 19:56:28.406890   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:28.406908   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.406916   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.406919   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.412364   25689 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0708 19:56:28.907391   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:28.907408   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.907416   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.907420   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.912117   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:28.912669   25689 node_ready.go:49] node "ha-511021-m02" has status "Ready":"True"
	I0708 19:56:28.912685   25689 node_ready.go:38] duration metric: took 3.006371704s for node "ha-511021-m02" to be "Ready" ...
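
The round_trippers traffic above is minikube's node_ready wait: GET the node object roughly every 500ms until its Ready condition turns True, which here took about 3 seconds. A rough client-go equivalent is sketched below; the kubeconfig path is taken from the loader line above, the 6m0s timeout and the poll interval mirror the log, and none of this is minikube's actual node_ready.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as reported by the loader line in the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19195-5988/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// Poll roughly every 500ms, matching the cadence visible in the log.
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-511021-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node ha-511021-m02 is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
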
	I0708 19:56:28.912692   25689 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 19:56:28.912758   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:56:28.912769   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.912778   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.912783   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.917610   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:28.925229   25689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4lzjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.925311   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4lzjf
	I0708 19:56:28.925319   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.925326   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.925332   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.932858   25689 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0708 19:56:28.933481   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:28.933496   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.933503   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.933507   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.938196   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:28.938640   25689 pod_ready.go:92] pod "coredns-7db6d8ff4d-4lzjf" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:28.938655   25689 pod_ready.go:81] duration metric: took 13.40159ms for pod "coredns-7db6d8ff4d-4lzjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.938664   25689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w6m9c" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.938717   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w6m9c
	I0708 19:56:28.938724   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.938731   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.938734   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.944566   25689 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0708 19:56:28.945307   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:28.945327   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.945337   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.945342   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.949220   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:28.949786   25689 pod_ready.go:92] pod "coredns-7db6d8ff4d-w6m9c" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:28.949806   25689 pod_ready.go:81] duration metric: took 11.135851ms for pod "coredns-7db6d8ff4d-w6m9c" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.949816   25689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.949867   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021
	I0708 19:56:28.949874   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.949883   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.949889   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.953241   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:28.953724   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:28.953739   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.953749   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.953753   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.956410   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:28.956940   25689 pod_ready.go:92] pod "etcd-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:28.956956   25689 pod_ready.go:81] duration metric: took 7.134034ms for pod "etcd-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.956970   25689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.957021   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:28.957029   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.957035   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.957038   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.959753   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:28.960270   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:28.960282   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.960289   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.960294   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.963659   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:29.457237   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:29.457258   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:29.457266   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:29.457271   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:29.460685   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:29.461279   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:29.461296   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:29.461304   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:29.461309   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:29.464229   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:29.958030   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:29.958054   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:29.958062   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:29.958066   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:29.962162   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:29.963050   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:29.963068   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:29.963079   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:29.963086   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:29.971876   25689 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0708 19:56:30.457159   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:30.457180   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:30.457186   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:30.457191   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:30.461184   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:30.461939   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:30.461953   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:30.461960   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:30.461965   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:30.464513   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:30.957877   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:30.957898   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:30.957908   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:30.957912   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:30.960807   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:30.961423   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:30.961439   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:30.961448   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:30.961454   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:30.963755   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:30.964245   25689 pod_ready.go:102] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"False"
	I0708 19:56:31.457384   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:31.457402   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:31.457410   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:31.457414   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:31.461633   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:31.462860   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:31.462875   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:31.462882   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:31.462887   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:31.466276   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:31.957965   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:31.957988   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:31.957999   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:31.958004   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:31.961523   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:31.962208   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:31.962230   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:31.962237   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:31.962241   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:31.964883   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:32.457861   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:32.457880   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:32.457888   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:32.457893   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:32.461623   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:32.462831   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:32.462846   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:32.462853   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:32.462856   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:32.465541   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:32.957608   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:32.957629   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:32.957637   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:32.957643   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:32.960569   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:32.961442   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:32.961462   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:32.961469   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:32.961473   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:32.964325   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:32.965059   25689 pod_ready.go:102] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"False"
	I0708 19:56:33.457240   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:33.457262   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:33.457270   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:33.457274   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:33.460238   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:33.460958   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:33.460974   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:33.460981   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:33.460984   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:33.463583   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:33.957976   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:33.958004   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:33.958017   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:33.958024   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:33.961284   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:33.962132   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:33.962148   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:33.962155   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:33.962159   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:33.971565   25689 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0708 19:56:34.457943   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:34.457963   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:34.457971   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:34.457974   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:34.460423   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:34.461215   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:34.461228   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:34.461235   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:34.461241   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:34.463519   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:34.958234   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:34.958260   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:34.958270   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:34.958275   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:34.961837   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:34.962627   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:34.962639   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:34.962646   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:34.962649   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:34.965512   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:34.966027   25689 pod_ready.go:102] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"False"
	I0708 19:56:35.457449   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:35.457470   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:35.457478   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:35.457482   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:35.462187   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:35.462747   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:35.462766   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:35.462774   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:35.462778   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:35.465123   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:35.957196   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:35.957219   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:35.957227   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:35.957231   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:35.960989   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:35.961703   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:35.961717   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:35.961724   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:35.961727   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:35.964654   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:36.457563   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:36.457585   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:36.457593   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:36.457598   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:36.460947   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:36.461856   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:36.461872   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:36.461883   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:36.461888   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:36.465329   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:36.957515   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:36.957538   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:36.957547   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:36.957552   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:36.960499   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:36.961013   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:36.961026   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:36.961036   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:36.961043   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:36.963318   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:37.457943   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:37.457963   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:37.457971   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:37.457974   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:37.461341   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:37.461969   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:37.461982   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:37.461990   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:37.461994   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:37.464663   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:37.465295   25689 pod_ready.go:102] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"False"
	I0708 19:56:37.958043   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:37.958065   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:37.958073   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:37.958077   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:37.962163   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:37.962751   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:37.962763   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:37.962771   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:37.962776   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:37.965420   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:38.457426   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:38.457448   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:38.457456   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:38.457461   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:38.461786   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:38.462829   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:38.462849   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:38.462861   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:38.462869   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:38.465874   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:38.957927   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:38.957952   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:38.957963   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:38.957969   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:38.961903   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:38.962559   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:38.962574   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:38.962581   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:38.962585   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:38.965818   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:39.457844   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:39.457866   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:39.457873   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:39.457877   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:39.461052   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:39.461888   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:39.461907   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:39.461917   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:39.461923   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:39.464623   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:39.957240   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:39.957261   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:39.957269   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:39.957275   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:39.961320   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:39.962029   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:39.962046   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:39.962063   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:39.962069   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:39.964966   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:39.965582   25689 pod_ready.go:102] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"False"
	I0708 19:56:40.458018   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:40.458039   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:40.458050   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:40.458056   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:40.461399   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:40.462016   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:40.462033   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:40.462043   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:40.462048   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:40.464946   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:40.957232   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:40.957249   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:40.957257   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:40.957261   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:40.961326   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:40.962034   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:40.962047   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:40.962054   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:40.962059   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:40.965054   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:41.457910   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:41.457930   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:41.457937   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:41.457942   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:41.461521   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:41.462272   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:41.462285   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:41.462292   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:41.462298   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:41.464954   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:41.957133   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:41.957154   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:41.957161   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:41.957167   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:41.960945   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:41.961842   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:41.961857   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:41.961865   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:41.961868   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:41.964417   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.457263   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:42.457286   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.457294   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.457298   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.460751   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:42.461505   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:42.461518   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.461525   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.461530   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.464456   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.465281   25689 pod_ready.go:92] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:42.465297   25689 pod_ready.go:81] duration metric: took 13.508321083s for pod "etcd-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.465311   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.465355   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021
	I0708 19:56:42.465362   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.465369   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.465373   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.468275   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.468875   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:42.468891   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.468898   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.468900   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.471097   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.471559   25689 pod_ready.go:92] pod "kube-apiserver-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:42.471575   25689 pod_ready.go:81] duration metric: took 6.259ms for pod "kube-apiserver-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.471583   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.471628   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m02
	I0708 19:56:42.471636   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.471642   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.471645   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.474045   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.475028   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:42.475050   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.475057   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.475063   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.477184   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.477671   25689 pod_ready.go:92] pod "kube-apiserver-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:42.477687   25689 pod_ready.go:81] duration metric: took 6.098489ms for pod "kube-apiserver-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.477695   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.477758   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021
	I0708 19:56:42.477766   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.477773   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.477777   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.479977   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.480456   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:42.480468   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.480475   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.480478   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.482861   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.483338   25689 pod_ready.go:92] pod "kube-controller-manager-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:42.483355   25689 pod_ready.go:81] duration metric: took 5.653907ms for pod "kube-controller-manager-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.483364   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.483425   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m02
	I0708 19:56:42.483435   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.483465   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.483477   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.486028   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.486998   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:42.487014   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.487021   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.487027   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.489165   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.489813   25689 pod_ready.go:92] pod "kube-controller-manager-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:42.489841   25689 pod_ready.go:81] duration metric: took 6.459082ms for pod "kube-controller-manager-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.489854   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-976tb" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.658256   25689 request.go:629] Waited for 168.328911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-976tb
	I0708 19:56:42.658308   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-976tb
	I0708 19:56:42.658313   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.658320   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.658324   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.661739   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:42.857764   25689 request.go:629] Waited for 195.466462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:42.857835   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:42.857841   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.857850   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.857860   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.861038   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:42.861784   25689 pod_ready.go:92] pod "kube-proxy-976tb" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:42.861805   25689 pod_ready.go:81] duration metric: took 371.940121ms for pod "kube-proxy-976tb" in "kube-system" namespace to be "Ready" ...
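The repeated "Waited for ... due to client-side throttling" messages above come from client-go's default client-side rate limiter (roughly 5 QPS with a burst of 10), not from API-server priority and fairness. Purely for reference, a minimal Go sketch showing where those limits live; the kubeconfig path is a placeholder and this is not minikube's own code:

package sketch

import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with the client-side QPS/burst raised, so
// back-to-back GETs like the ones above are not delayed by the local limiter.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig) // kubeconfig path is an assumption
    if err != nil {
        return nil, err
    }
    cfg.QPS = 50    // client-go default is ~5 requests/second
    cfg.Burst = 100 // client-go default burst is 10
    return kubernetes.NewForConfig(cfg)
}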
	I0708 19:56:42.861819   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tmkjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:43.057930   25689 request.go:629] Waited for 196.046623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmkjf
	I0708 19:56:43.058009   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmkjf
	I0708 19:56:43.058022   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:43.058032   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:43.058042   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:43.062026   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:43.257353   25689 request.go:629] Waited for 194.208854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:43.257424   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:43.257432   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:43.257442   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:43.257446   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:43.260627   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:43.261256   25689 pod_ready.go:92] pod "kube-proxy-tmkjf" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:43.261275   25689 pod_ready.go:81] duration metric: took 399.449111ms for pod "kube-proxy-tmkjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:43.261287   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:43.458216   25689 request.go:629] Waited for 196.846469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021
	I0708 19:56:43.458297   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021
	I0708 19:56:43.458304   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:43.458318   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:43.458330   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:43.461848   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:43.657812   25689 request.go:629] Waited for 195.372871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:43.657888   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:43.657897   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:43.657905   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:43.657911   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:43.661064   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:43.661857   25689 pod_ready.go:92] pod "kube-scheduler-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:43.661879   25689 pod_ready.go:81] duration metric: took 400.583933ms for pod "kube-scheduler-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:43.661892   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:43.857959   25689 request.go:629] Waited for 196.003992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021-m02
	I0708 19:56:43.858020   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021-m02
	I0708 19:56:43.858025   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:43.858032   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:43.858040   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:43.861046   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:44.057987   25689 request.go:629] Waited for 196.36324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:44.058072   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:44.058081   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:44.058092   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:44.058097   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:44.061538   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:44.062231   25689 pod_ready.go:92] pod "kube-scheduler-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:44.062251   25689 pod_ready.go:81] duration metric: took 400.352378ms for pod "kube-scheduler-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:44.062265   25689 pod_ready.go:38] duration metric: took 15.149561086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
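The pod_ready waits above poll each control-plane pod until its Ready condition reports True. As an illustration only (not minikube's helper), a minimal client-go check of that condition:

package sketch

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the named pod's PodReady condition is True,
// which is the state the pod_ready.go log lines above are waiting for.
func isPodReady(ctx context.Context, c kubernetes.Interface, namespace, name string) (bool, error) {
    pod, err := c.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return false, err
    }
    for _, cond := range pod.Status.Conditions {
        if cond.Type == corev1.PodReady {
            return cond.Status == corev1.ConditionTrue, nil
        }
    }
    return false, nil
}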
	I0708 19:56:44.062283   25689 api_server.go:52] waiting for apiserver process to appear ...
	I0708 19:56:44.062342   25689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 19:56:44.079005   25689 api_server.go:72] duration metric: took 18.582500945s to wait for apiserver process to appear ...
	I0708 19:56:44.079035   25689 api_server.go:88] waiting for apiserver healthz status ...
	I0708 19:56:44.079055   25689 api_server.go:253] Checking apiserver healthz at https://192.168.39.33:8443/healthz ...
	I0708 19:56:44.085015   25689 api_server.go:279] https://192.168.39.33:8443/healthz returned 200:
	ok
	I0708 19:56:44.085079   25689 round_trippers.go:463] GET https://192.168.39.33:8443/version
	I0708 19:56:44.085091   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:44.085101   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:44.085107   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:44.085909   25689 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 19:56:44.086045   25689 api_server.go:141] control plane version: v1.30.2
	I0708 19:56:44.086065   25689 api_server.go:131] duration metric: took 7.022616ms to wait for apiserver health ...
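The healthz step is a plain HTTPS GET against https://192.168.39.33:8443/healthz that expects a 200 "ok". An illustrative sketch of such a probe, assuming the cluster CA bundle is available at some path (minikube wires in its own transport, so treat the details as placeholders):

package sketch

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "io"
    "net/http"
    "os"
    "time"
)

// checkHealthz returns nil when GET <endpoint>/healthz answers 200.
func checkHealthz(caFile, endpoint string) error {
    caPEM, err := os.ReadFile(caFile) // e.g. the profile's ca.crt (placeholder path)
    if err != nil {
        return err
    }
    pool := x509.NewCertPool()
    pool.AppendCertsFromPEM(caPEM)
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    }
    resp, err := client.Get(endpoint + "/healthz")
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    }
    return nil
}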
	I0708 19:56:44.086073   25689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 19:56:44.257297   25689 request.go:629] Waited for 171.152608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:56:44.257346   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:56:44.257354   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:44.257361   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:44.257368   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:44.262269   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:44.266549   25689 system_pods.go:59] 17 kube-system pods found
	I0708 19:56:44.266576   25689 system_pods.go:61] "coredns-7db6d8ff4d-4lzjf" [4bcfc11d-8368-4c95-bf64-5b3d09c4b455] Running
	I0708 19:56:44.266581   25689 system_pods.go:61] "coredns-7db6d8ff4d-w6m9c" [8f45dd66-3096-4878-8b2b-96dcf12bbef2] Running
	I0708 19:56:44.266586   25689 system_pods.go:61] "etcd-ha-511021" [52134689-3a05-4bfa-ae28-2696f8bf0ccb] Running
	I0708 19:56:44.266590   25689 system_pods.go:61] "etcd-ha-511021-m02" [acc2d6d9-6796-453d-a5bb-492c28c5eb94] Running
	I0708 19:56:44.266593   25689 system_pods.go:61] "kindnet-4f49v" [1f0b50ca-73cb-4ffb-9676-09e3a28d7636] Running
	I0708 19:56:44.266596   25689 system_pods.go:61] "kindnet-gn8kn" [68f966e1-e40c-4e6e-8fa4-d3167090fa7c] Running
	I0708 19:56:44.266599   25689 system_pods.go:61] "kube-apiserver-ha-511021" [e5f0c179-18b9-40ce-9c9c-bfe810f6a422] Running
	I0708 19:56:44.266602   25689 system_pods.go:61] "kube-apiserver-ha-511021-m02" [33e08ded-e75f-4f56-8d52-5447d025d348] Running
	I0708 19:56:44.266606   25689 system_pods.go:61] "kube-controller-manager-ha-511021" [136879af-0997-416e-956a-632e940e1da6] Running
	I0708 19:56:44.266609   25689 system_pods.go:61] "kube-controller-manager-ha-511021-m02" [a5d3e392-c4f1-4784-b234-e57a5e9689a9] Running
	I0708 19:56:44.266611   25689 system_pods.go:61] "kube-proxy-976tb" [97fd998d-9281-40b0-bd6d-cebf8d4bfa02] Running
	I0708 19:56:44.266614   25689 system_pods.go:61] "kube-proxy-tmkjf" [fb7c00aa-f846-430e-92a2-04cd2fc8a62b] Running
	I0708 19:56:44.266617   25689 system_pods.go:61] "kube-scheduler-ha-511021" [978f9f3f-1bfe-4d9c-9dcf-5a410f101c87] Running
	I0708 19:56:44.266620   25689 system_pods.go:61] "kube-scheduler-ha-511021-m02" [3a4313c1-625d-4ba1-873f-da3ae493f1b5] Running
	I0708 19:56:44.266623   25689 system_pods.go:61] "kube-vip-ha-511021" [c2d1c07a-51ae-4264-9fbc-fd7af40ac2d0] Running
	I0708 19:56:44.266628   25689 system_pods.go:61] "kube-vip-ha-511021-m02" [ebc968ae-70c7-45ac-aa9b-ddc9e7142f71] Running
	I0708 19:56:44.266633   25689 system_pods.go:61] "storage-provisioner" [7d02def4-3af1-4268-a8fa-072c6fd71c83] Running
	I0708 19:56:44.266638   25689 system_pods.go:74] duration metric: took 180.557225ms to wait for pod list to return data ...
	I0708 19:56:44.266647   25689 default_sa.go:34] waiting for default service account to be created ...
	I0708 19:56:44.458065   25689 request.go:629] Waited for 191.353602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/default/serviceaccounts
	I0708 19:56:44.458123   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/default/serviceaccounts
	I0708 19:56:44.458131   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:44.458142   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:44.458151   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:44.461390   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:44.461671   25689 default_sa.go:45] found service account: "default"
	I0708 19:56:44.461692   25689 default_sa.go:55] duration metric: took 195.038543ms for default service account to be created ...
	I0708 19:56:44.461703   25689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 19:56:44.657832   25689 request.go:629] Waited for 196.060395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:56:44.657907   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:56:44.657919   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:44.657930   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:44.657937   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:44.663091   25689 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0708 19:56:44.667327   25689 system_pods.go:86] 17 kube-system pods found
	I0708 19:56:44.667349   25689 system_pods.go:89] "coredns-7db6d8ff4d-4lzjf" [4bcfc11d-8368-4c95-bf64-5b3d09c4b455] Running
	I0708 19:56:44.667355   25689 system_pods.go:89] "coredns-7db6d8ff4d-w6m9c" [8f45dd66-3096-4878-8b2b-96dcf12bbef2] Running
	I0708 19:56:44.667359   25689 system_pods.go:89] "etcd-ha-511021" [52134689-3a05-4bfa-ae28-2696f8bf0ccb] Running
	I0708 19:56:44.667363   25689 system_pods.go:89] "etcd-ha-511021-m02" [acc2d6d9-6796-453d-a5bb-492c28c5eb94] Running
	I0708 19:56:44.667367   25689 system_pods.go:89] "kindnet-4f49v" [1f0b50ca-73cb-4ffb-9676-09e3a28d7636] Running
	I0708 19:56:44.667371   25689 system_pods.go:89] "kindnet-gn8kn" [68f966e1-e40c-4e6e-8fa4-d3167090fa7c] Running
	I0708 19:56:44.667375   25689 system_pods.go:89] "kube-apiserver-ha-511021" [e5f0c179-18b9-40ce-9c9c-bfe810f6a422] Running
	I0708 19:56:44.667379   25689 system_pods.go:89] "kube-apiserver-ha-511021-m02" [33e08ded-e75f-4f56-8d52-5447d025d348] Running
	I0708 19:56:44.667384   25689 system_pods.go:89] "kube-controller-manager-ha-511021" [136879af-0997-416e-956a-632e940e1da6] Running
	I0708 19:56:44.667388   25689 system_pods.go:89] "kube-controller-manager-ha-511021-m02" [a5d3e392-c4f1-4784-b234-e57a5e9689a9] Running
	I0708 19:56:44.667391   25689 system_pods.go:89] "kube-proxy-976tb" [97fd998d-9281-40b0-bd6d-cebf8d4bfa02] Running
	I0708 19:56:44.667395   25689 system_pods.go:89] "kube-proxy-tmkjf" [fb7c00aa-f846-430e-92a2-04cd2fc8a62b] Running
	I0708 19:56:44.667398   25689 system_pods.go:89] "kube-scheduler-ha-511021" [978f9f3f-1bfe-4d9c-9dcf-5a410f101c87] Running
	I0708 19:56:44.667402   25689 system_pods.go:89] "kube-scheduler-ha-511021-m02" [3a4313c1-625d-4ba1-873f-da3ae493f1b5] Running
	I0708 19:56:44.667405   25689 system_pods.go:89] "kube-vip-ha-511021" [c2d1c07a-51ae-4264-9fbc-fd7af40ac2d0] Running
	I0708 19:56:44.667410   25689 system_pods.go:89] "kube-vip-ha-511021-m02" [ebc968ae-70c7-45ac-aa9b-ddc9e7142f71] Running
	I0708 19:56:44.667414   25689 system_pods.go:89] "storage-provisioner" [7d02def4-3af1-4268-a8fa-072c6fd71c83] Running
	I0708 19:56:44.667421   25689 system_pods.go:126] duration metric: took 205.709311ms to wait for k8s-apps to be running ...
	I0708 19:56:44.667431   25689 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 19:56:44.667495   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 19:56:44.683752   25689 system_svc.go:56] duration metric: took 16.313272ms WaitForService to wait for kubelet
	I0708 19:56:44.683777   25689 kubeadm.go:576] duration metric: took 19.187277697s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 19:56:44.683793   25689 node_conditions.go:102] verifying NodePressure condition ...
	I0708 19:56:44.857511   25689 request.go:629] Waited for 173.65485ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes
	I0708 19:56:44.857556   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes
	I0708 19:56:44.857580   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:44.857590   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:44.857597   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:44.861147   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:44.862054   25689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 19:56:44.862081   25689 node_conditions.go:123] node cpu capacity is 2
	I0708 19:56:44.862095   25689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 19:56:44.862099   25689 node_conditions.go:123] node cpu capacity is 2
	I0708 19:56:44.862103   25689 node_conditions.go:105] duration metric: took 178.305226ms to run NodePressure ...
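The NodePressure step simply reads each node's reported capacity, which is where the "node cpu capacity is 2" and "node storage ephemeral capacity is 17734596Ki" figures come from. A short illustrative client-go sketch, assuming an existing clientset:

package sketch

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists CPU and ephemeral-storage capacity for every node,
// the same fields reported per node in the log above.
func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
    nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    for _, n := range nodes.Items {
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    }
    return nil
}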
	I0708 19:56:44.862112   25689 start.go:240] waiting for startup goroutines ...
	I0708 19:56:44.862138   25689 start.go:254] writing updated cluster config ...
	I0708 19:56:44.864101   25689 out.go:177] 
	I0708 19:56:44.865447   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:56:44.865533   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:56:44.866955   25689 out.go:177] * Starting "ha-511021-m03" control-plane node in "ha-511021" cluster
	I0708 19:56:44.867966   25689 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 19:56:44.867985   25689 cache.go:56] Caching tarball of preloaded images
	I0708 19:56:44.868084   25689 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 19:56:44.868097   25689 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 19:56:44.868191   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:56:44.868369   25689 start.go:360] acquireMachinesLock for ha-511021-m03: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 19:56:44.868416   25689 start.go:364] duration metric: took 26.562µs to acquireMachinesLock for "ha-511021-m03"
	I0708 19:56:44.868439   25689 start.go:93] Provisioning new machine with config: &{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:56:44.868539   25689 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0708 19:56:44.869965   25689 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 19:56:44.870070   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:56:44.870101   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:56:44.886541   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0708 19:56:44.886928   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:56:44.887333   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:56:44.887352   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:56:44.887678   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:56:44.887821   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetMachineName
	I0708 19:56:44.887950   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:56:44.888103   25689 start.go:159] libmachine.API.Create for "ha-511021" (driver="kvm2")
	I0708 19:56:44.888137   25689 client.go:168] LocalClient.Create starting
	I0708 19:56:44.888174   25689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem
	I0708 19:56:44.888208   25689 main.go:141] libmachine: Decoding PEM data...
	I0708 19:56:44.888227   25689 main.go:141] libmachine: Parsing certificate...
	I0708 19:56:44.888354   25689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem
	I0708 19:56:44.888393   25689 main.go:141] libmachine: Decoding PEM data...
	I0708 19:56:44.888410   25689 main.go:141] libmachine: Parsing certificate...
	I0708 19:56:44.888450   25689 main.go:141] libmachine: Running pre-create checks...
	I0708 19:56:44.888463   25689 main.go:141] libmachine: (ha-511021-m03) Calling .PreCreateCheck
	I0708 19:56:44.888624   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetConfigRaw
	I0708 19:56:44.889028   25689 main.go:141] libmachine: Creating machine...
	I0708 19:56:44.889043   25689 main.go:141] libmachine: (ha-511021-m03) Calling .Create
	I0708 19:56:44.889149   25689 main.go:141] libmachine: (ha-511021-m03) Creating KVM machine...
	I0708 19:56:44.890401   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found existing default KVM network
	I0708 19:56:44.890531   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found existing private KVM network mk-ha-511021
	I0708 19:56:44.890628   25689 main.go:141] libmachine: (ha-511021-m03) Setting up store path in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03 ...
	I0708 19:56:44.890659   25689 main.go:141] libmachine: (ha-511021-m03) Building disk image from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso
	I0708 19:56:44.890702   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:44.890623   26453 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:56:44.890791   25689 main.go:141] libmachine: (ha-511021-m03) Downloading /home/jenkins/minikube-integration/19195-5988/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso...
	I0708 19:56:45.108556   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:45.108427   26453 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa...
	I0708 19:56:45.347415   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:45.347305   26453 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/ha-511021-m03.rawdisk...
	I0708 19:56:45.347464   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Writing magic tar header
	I0708 19:56:45.347479   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Writing SSH key tar header
	I0708 19:56:45.347531   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:45.347475   26453 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03 ...
	I0708 19:56:45.347614   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03
	I0708 19:56:45.347642   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines
	I0708 19:56:45.347652   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:56:45.347664   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988
	I0708 19:56:45.347672   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0708 19:56:45.347683   25689 main.go:141] libmachine: (ha-511021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03 (perms=drwx------)
	I0708 19:56:45.347695   25689 main.go:141] libmachine: (ha-511021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines (perms=drwxr-xr-x)
	I0708 19:56:45.347710   25689 main.go:141] libmachine: (ha-511021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube (perms=drwxr-xr-x)
	I0708 19:56:45.347726   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home/jenkins
	I0708 19:56:45.347740   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home
	I0708 19:56:45.347748   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Skipping /home - not owner
	I0708 19:56:45.347761   25689 main.go:141] libmachine: (ha-511021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988 (perms=drwxrwxr-x)
	I0708 19:56:45.347773   25689 main.go:141] libmachine: (ha-511021-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0708 19:56:45.347785   25689 main.go:141] libmachine: (ha-511021-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0708 19:56:45.347794   25689 main.go:141] libmachine: (ha-511021-m03) Creating domain...
	I0708 19:56:45.348887   25689 main.go:141] libmachine: (ha-511021-m03) define libvirt domain using xml: 
	I0708 19:56:45.348905   25689 main.go:141] libmachine: (ha-511021-m03) <domain type='kvm'>
	I0708 19:56:45.348913   25689 main.go:141] libmachine: (ha-511021-m03)   <name>ha-511021-m03</name>
	I0708 19:56:45.348918   25689 main.go:141] libmachine: (ha-511021-m03)   <memory unit='MiB'>2200</memory>
	I0708 19:56:45.348924   25689 main.go:141] libmachine: (ha-511021-m03)   <vcpu>2</vcpu>
	I0708 19:56:45.348930   25689 main.go:141] libmachine: (ha-511021-m03)   <features>
	I0708 19:56:45.348935   25689 main.go:141] libmachine: (ha-511021-m03)     <acpi/>
	I0708 19:56:45.348944   25689 main.go:141] libmachine: (ha-511021-m03)     <apic/>
	I0708 19:56:45.348948   25689 main.go:141] libmachine: (ha-511021-m03)     <pae/>
	I0708 19:56:45.348958   25689 main.go:141] libmachine: (ha-511021-m03)     
	I0708 19:56:45.348981   25689 main.go:141] libmachine: (ha-511021-m03)   </features>
	I0708 19:56:45.348999   25689 main.go:141] libmachine: (ha-511021-m03)   <cpu mode='host-passthrough'>
	I0708 19:56:45.349005   25689 main.go:141] libmachine: (ha-511021-m03)   
	I0708 19:56:45.349011   25689 main.go:141] libmachine: (ha-511021-m03)   </cpu>
	I0708 19:56:45.349020   25689 main.go:141] libmachine: (ha-511021-m03)   <os>
	I0708 19:56:45.349031   25689 main.go:141] libmachine: (ha-511021-m03)     <type>hvm</type>
	I0708 19:56:45.349041   25689 main.go:141] libmachine: (ha-511021-m03)     <boot dev='cdrom'/>
	I0708 19:56:45.349052   25689 main.go:141] libmachine: (ha-511021-m03)     <boot dev='hd'/>
	I0708 19:56:45.349064   25689 main.go:141] libmachine: (ha-511021-m03)     <bootmenu enable='no'/>
	I0708 19:56:45.349070   25689 main.go:141] libmachine: (ha-511021-m03)   </os>
	I0708 19:56:45.349075   25689 main.go:141] libmachine: (ha-511021-m03)   <devices>
	I0708 19:56:45.349089   25689 main.go:141] libmachine: (ha-511021-m03)     <disk type='file' device='cdrom'>
	I0708 19:56:45.349099   25689 main.go:141] libmachine: (ha-511021-m03)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/boot2docker.iso'/>
	I0708 19:56:45.349106   25689 main.go:141] libmachine: (ha-511021-m03)       <target dev='hdc' bus='scsi'/>
	I0708 19:56:45.349113   25689 main.go:141] libmachine: (ha-511021-m03)       <readonly/>
	I0708 19:56:45.349123   25689 main.go:141] libmachine: (ha-511021-m03)     </disk>
	I0708 19:56:45.349135   25689 main.go:141] libmachine: (ha-511021-m03)     <disk type='file' device='disk'>
	I0708 19:56:45.349147   25689 main.go:141] libmachine: (ha-511021-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0708 19:56:45.349161   25689 main.go:141] libmachine: (ha-511021-m03)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/ha-511021-m03.rawdisk'/>
	I0708 19:56:45.349171   25689 main.go:141] libmachine: (ha-511021-m03)       <target dev='hda' bus='virtio'/>
	I0708 19:56:45.349177   25689 main.go:141] libmachine: (ha-511021-m03)     </disk>
	I0708 19:56:45.349184   25689 main.go:141] libmachine: (ha-511021-m03)     <interface type='network'>
	I0708 19:56:45.349217   25689 main.go:141] libmachine: (ha-511021-m03)       <source network='mk-ha-511021'/>
	I0708 19:56:45.349241   25689 main.go:141] libmachine: (ha-511021-m03)       <model type='virtio'/>
	I0708 19:56:45.349253   25689 main.go:141] libmachine: (ha-511021-m03)     </interface>
	I0708 19:56:45.349265   25689 main.go:141] libmachine: (ha-511021-m03)     <interface type='network'>
	I0708 19:56:45.349278   25689 main.go:141] libmachine: (ha-511021-m03)       <source network='default'/>
	I0708 19:56:45.349290   25689 main.go:141] libmachine: (ha-511021-m03)       <model type='virtio'/>
	I0708 19:56:45.349313   25689 main.go:141] libmachine: (ha-511021-m03)     </interface>
	I0708 19:56:45.349335   25689 main.go:141] libmachine: (ha-511021-m03)     <serial type='pty'>
	I0708 19:56:45.349347   25689 main.go:141] libmachine: (ha-511021-m03)       <target port='0'/>
	I0708 19:56:45.349353   25689 main.go:141] libmachine: (ha-511021-m03)     </serial>
	I0708 19:56:45.349363   25689 main.go:141] libmachine: (ha-511021-m03)     <console type='pty'>
	I0708 19:56:45.349375   25689 main.go:141] libmachine: (ha-511021-m03)       <target type='serial' port='0'/>
	I0708 19:56:45.349387   25689 main.go:141] libmachine: (ha-511021-m03)     </console>
	I0708 19:56:45.349401   25689 main.go:141] libmachine: (ha-511021-m03)     <rng model='virtio'>
	I0708 19:56:45.349428   25689 main.go:141] libmachine: (ha-511021-m03)       <backend model='random'>/dev/random</backend>
	I0708 19:56:45.349448   25689 main.go:141] libmachine: (ha-511021-m03)     </rng>
	I0708 19:56:45.349460   25689 main.go:141] libmachine: (ha-511021-m03)     
	I0708 19:56:45.349468   25689 main.go:141] libmachine: (ha-511021-m03)     
	I0708 19:56:45.349475   25689 main.go:141] libmachine: (ha-511021-m03)   </devices>
	I0708 19:56:45.349482   25689 main.go:141] libmachine: (ha-511021-m03) </domain>
	I0708 19:56:45.349491   25689 main.go:141] libmachine: (ha-511021-m03) 
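The XML above is the libvirt domain definition the kvm2 driver generates for the new m03 node (the boot2docker ISO as a CD-ROM, the raw disk, and NICs on both the default and mk-ha-511021 networks). As a rough illustration of the same define-and-boot flow, here is a sketch that shells out to virsh; it is not the driver's code, and the XML path is a placeholder:

package sketch

import (
    "fmt"
    "os/exec"
)

// defineAndStart registers a domain from an XML file and boots it.
// "virsh define" makes the domain persistent; "virsh start" powers it on.
func defineAndStart(xmlPath, domainName string) error {
    if out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
        return fmt.Errorf("define failed: %v: %s", err, out)
    }
    if out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", domainName).CombinedOutput(); err != nil {
        return fmt.Errorf("start failed: %v: %s", err, out)
    }
    return nil
}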
	I0708 19:56:45.356148   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:5c:5a:59 in network default
	I0708 19:56:45.356744   25689 main.go:141] libmachine: (ha-511021-m03) Ensuring networks are active...
	I0708 19:56:45.356770   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:45.357591   25689 main.go:141] libmachine: (ha-511021-m03) Ensuring network default is active
	I0708 19:56:45.357886   25689 main.go:141] libmachine: (ha-511021-m03) Ensuring network mk-ha-511021 is active
	I0708 19:56:45.358227   25689 main.go:141] libmachine: (ha-511021-m03) Getting domain xml...
	I0708 19:56:45.358881   25689 main.go:141] libmachine: (ha-511021-m03) Creating domain...
	I0708 19:56:46.618992   25689 main.go:141] libmachine: (ha-511021-m03) Waiting to get IP...
	I0708 19:56:46.619693   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:46.620162   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:46.620198   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:46.620133   26453 retry.go:31] will retry after 202.321963ms: waiting for machine to come up
	I0708 19:56:46.824561   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:46.825051   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:46.825077   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:46.824997   26453 retry.go:31] will retry after 306.03783ms: waiting for machine to come up
	I0708 19:56:47.132473   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:47.132887   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:47.132913   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:47.132847   26453 retry.go:31] will retry after 374.380364ms: waiting for machine to come up
	I0708 19:56:47.508241   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:47.508620   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:47.508650   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:47.508576   26453 retry.go:31] will retry after 424.568331ms: waiting for machine to come up
	I0708 19:56:47.935212   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:47.935636   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:47.935659   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:47.935572   26453 retry.go:31] will retry after 606.237869ms: waiting for machine to come up
	I0708 19:56:48.544043   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:48.544527   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:48.544594   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:48.544492   26453 retry.go:31] will retry after 739.656893ms: waiting for machine to come up
	I0708 19:56:49.285546   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:49.285947   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:49.285976   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:49.285897   26453 retry.go:31] will retry after 855.924967ms: waiting for machine to come up
	I0708 19:56:50.142964   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:50.143355   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:50.143382   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:50.143326   26453 retry.go:31] will retry after 1.301147226s: waiting for machine to come up
	I0708 19:56:51.446073   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:51.446554   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:51.446579   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:51.446512   26453 retry.go:31] will retry after 1.222212721s: waiting for machine to come up
	I0708 19:56:52.670715   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:52.671102   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:52.671129   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:52.671068   26453 retry.go:31] will retry after 1.712355758s: waiting for machine to come up
	I0708 19:56:54.386067   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:54.386567   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:54.386595   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:54.386515   26453 retry.go:31] will retry after 2.80539565s: waiting for machine to come up
	I0708 19:56:57.194500   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:57.194933   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:57.194961   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:57.194870   26453 retry.go:31] will retry after 2.897013176s: waiting for machine to come up
	I0708 19:57:00.093476   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:00.093952   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:57:00.093992   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:57:00.093908   26453 retry.go:31] will retry after 2.750912917s: waiting for machine to come up
	I0708 19:57:02.847826   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:02.848235   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:57:02.848256   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:57:02.848198   26453 retry.go:31] will retry after 5.060992583s: waiting for machine to come up
	I0708 19:57:07.913251   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:07.913644   25689 main.go:141] libmachine: (ha-511021-m03) Found IP for machine: 192.168.39.70
	I0708 19:57:07.913665   25689 main.go:141] libmachine: (ha-511021-m03) Reserving static IP address...
	I0708 19:57:07.913675   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has current primary IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:07.914029   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find host DHCP lease matching {name: "ha-511021-m03", mac: "52:54:00:a7:80:5b", ip: "192.168.39.70"} in network mk-ha-511021
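The string of "unable to find current IP address ... will retry after ..." lines above is the driver polling libvirt's DHCP leases with a growing delay until the new VM picks up an address (192.168.39.70 here). A generic retry-with-backoff sketch in the same spirit; the lease lookup itself is left as a caller-supplied callback:

package sketch

import (
    "fmt"
    "time"
)

// waitForIP polls lookup() with capped exponential backoff until it returns a
// non-empty address or the timeout elapses, much like the retries logged above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    delay := 200 * time.Millisecond
    for time.Now().Before(deadline) {
        if ip, err := lookup(); err == nil && ip != "" {
            return ip, nil
        }
        time.Sleep(delay)
        delay *= 2
        if delay > 5*time.Second {
            delay = 5 * time.Second // cap the wait between polls
        }
    }
    return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}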
	I0708 19:57:07.988510   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Getting to WaitForSSH function...
	I0708 19:57:07.988533   25689 main.go:141] libmachine: (ha-511021-m03) Reserved static IP address: 192.168.39.70
	I0708 19:57:07.988546   25689 main.go:141] libmachine: (ha-511021-m03) Waiting for SSH to be available...
	I0708 19:57:07.991237   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:07.991735   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:07.991766   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:07.991828   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Using SSH client type: external
	I0708 19:57:07.991853   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa (-rw-------)
	I0708 19:57:07.991885   25689 main.go:141] libmachine: (ha-511021-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 19:57:07.991897   25689 main.go:141] libmachine: (ha-511021-m03) DBG | About to run SSH command:
	I0708 19:57:07.991909   25689 main.go:141] libmachine: (ha-511021-m03) DBG | exit 0
	I0708 19:57:08.123650   25689 main.go:141] libmachine: (ha-511021-m03) DBG | SSH cmd err, output: <nil>: 
	I0708 19:57:08.123964   25689 main.go:141] libmachine: (ha-511021-m03) KVM machine creation complete!
	I0708 19:57:08.124211   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetConfigRaw
	I0708 19:57:08.124710   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:08.124897   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:08.125023   25689 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0708 19:57:08.125038   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetState
	I0708 19:57:08.126437   25689 main.go:141] libmachine: Detecting operating system of created instance...
	I0708 19:57:08.126453   25689 main.go:141] libmachine: Waiting for SSH to be available...
	I0708 19:57:08.126460   25689 main.go:141] libmachine: Getting to WaitForSSH function...
	I0708 19:57:08.126469   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.128873   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.129279   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.129304   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.129461   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:08.129629   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.129935   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.130075   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:08.130261   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:57:08.130499   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0708 19:57:08.130513   25689 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0708 19:57:08.242960   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
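Once the VM answers, libmachine switches from the external ssh binary to its native client and runs "exit 0" as a liveness probe. An illustrative equivalent using golang.org/x/crypto/ssh and the generated id_rsa key; the InsecureIgnoreHostKey choice and the paths are assumptions that only make sense for a throwaway test VM:

package sketch

import (
    "net"
    "os"
    "time"

    "golang.org/x/crypto/ssh"
)

// sshAlive dials host:22 with the given private key and runs "exit 0".
func sshAlive(host, user, keyPath string) error {
    keyPEM, err := os.ReadFile(keyPath)
    if err != nil {
        return err
    }
    signer, err := ssh.ParsePrivateKey(keyPEM)
    if err != nil {
        return err
    }
    cfg := &ssh.ClientConfig{
        User:            user,
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for an ephemeral test VM
        Timeout:         10 * time.Second,
    }
    client, err := ssh.Dial("tcp", net.JoinHostPort(host, "22"), cfg)
    if err != nil {
        return err
    }
    defer client.Close()
    session, err := client.NewSession()
    if err != nil {
        return err
    }
    defer session.Close()
    return session.Run("exit 0")
}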
	I0708 19:57:08.242985   25689 main.go:141] libmachine: Detecting the provisioner...
	I0708 19:57:08.242996   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.246088   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.246487   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.246516   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.246644   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:08.246839   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.246986   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.247113   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:08.247285   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:57:08.247499   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0708 19:57:08.247514   25689 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0708 19:57:08.360156   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0708 19:57:08.360236   25689 main.go:141] libmachine: found compatible host: buildroot
	I0708 19:57:08.360246   25689 main.go:141] libmachine: Provisioning with buildroot...
	I0708 19:57:08.360254   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetMachineName
	I0708 19:57:08.360497   25689 buildroot.go:166] provisioning hostname "ha-511021-m03"
	I0708 19:57:08.360529   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetMachineName
	I0708 19:57:08.360714   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.363569   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.363920   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.363945   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.364095   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:08.364268   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.364445   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.364604   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:08.364765   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:57:08.364920   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0708 19:57:08.364938   25689 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-511021-m03 && echo "ha-511021-m03" | sudo tee /etc/hostname
	I0708 19:57:08.493966   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-511021-m03
	
	I0708 19:57:08.493989   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.496619   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.497015   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.497033   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.497274   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:08.497451   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.497608   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.497736   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:08.497905   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:57:08.498061   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0708 19:57:08.498076   25689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-511021-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-511021-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-511021-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 19:57:08.621154   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 19:57:08.621184   25689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 19:57:08.621203   25689 buildroot.go:174] setting up certificates
	I0708 19:57:08.621214   25689 provision.go:84] configureAuth start
	I0708 19:57:08.621225   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetMachineName
	I0708 19:57:08.621493   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 19:57:08.624180   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.624618   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.624645   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.624812   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.626647   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.627041   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.627063   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.627250   25689 provision.go:143] copyHostCerts
	I0708 19:57:08.627272   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 19:57:08.627299   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 19:57:08.627307   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 19:57:08.627378   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 19:57:08.627458   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 19:57:08.627482   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 19:57:08.627491   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 19:57:08.627517   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 19:57:08.627566   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 19:57:08.627582   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 19:57:08.627588   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 19:57:08.627608   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 19:57:08.627653   25689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.ha-511021-m03 san=[127.0.0.1 192.168.39.70 ha-511021-m03 localhost minikube]
	I0708 19:57:08.709893   25689 provision.go:177] copyRemoteCerts
	I0708 19:57:08.709964   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 19:57:08.709992   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.713220   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.713630   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.713663   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.713839   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:08.714029   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.714234   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:08.714370   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 19:57:08.802081   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 19:57:08.802153   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 19:57:08.826514   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 19:57:08.826598   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0708 19:57:08.850785   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 19:57:08.850864   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 19:57:08.877535   25689 provision.go:87] duration metric: took 256.307129ms to configureAuth
	I0708 19:57:08.877566   25689 buildroot.go:189] setting minikube options for container-runtime
	I0708 19:57:08.877797   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:57:08.877882   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.880566   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.880976   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.881007   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.881173   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:08.881366   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.881548   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.881682   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:08.881850   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:57:08.882045   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0708 19:57:08.882066   25689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 19:57:09.161202   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 19:57:09.161235   25689 main.go:141] libmachine: Checking connection to Docker...
	I0708 19:57:09.161253   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetURL
	I0708 19:57:09.162410   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Using libvirt version 6000000
	I0708 19:57:09.164876   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.165221   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.165247   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.165357   25689 main.go:141] libmachine: Docker is up and running!
	I0708 19:57:09.165373   25689 main.go:141] libmachine: Reticulating splines...
	I0708 19:57:09.165380   25689 client.go:171] duration metric: took 24.277232778s to LocalClient.Create
	I0708 19:57:09.165403   25689 start.go:167] duration metric: took 24.277302306s to libmachine.API.Create "ha-511021"
	I0708 19:57:09.165415   25689 start.go:293] postStartSetup for "ha-511021-m03" (driver="kvm2")
	I0708 19:57:09.165428   25689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 19:57:09.165448   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:09.165644   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 19:57:09.165664   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:09.167745   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.168020   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.168040   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.168196   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:09.168385   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:09.168535   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:09.168658   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 19:57:09.259553   25689 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 19:57:09.263695   25689 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 19:57:09.263720   25689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 19:57:09.263780   25689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 19:57:09.264006   25689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 19:57:09.264033   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /etc/ssl/certs/131412.pem
	I0708 19:57:09.264200   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 19:57:09.275837   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 19:57:09.303157   25689 start.go:296] duration metric: took 137.729964ms for postStartSetup
	I0708 19:57:09.303199   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetConfigRaw
	I0708 19:57:09.303835   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 19:57:09.306642   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.307136   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.307161   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.307485   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:57:09.307693   25689 start.go:128] duration metric: took 24.439141413s to createHost
	I0708 19:57:09.307716   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:09.310073   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.310482   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.310509   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.310692   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:09.310896   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:09.311038   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:09.311213   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:09.311468   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:57:09.311663   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0708 19:57:09.311679   25689 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 19:57:09.428333   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720468629.404388528
	
	I0708 19:57:09.428367   25689 fix.go:216] guest clock: 1720468629.404388528
	I0708 19:57:09.428378   25689 fix.go:229] Guest: 2024-07-08 19:57:09.404388528 +0000 UTC Remote: 2024-07-08 19:57:09.307705167 +0000 UTC m=+149.693400321 (delta=96.683361ms)
	I0708 19:57:09.428400   25689 fix.go:200] guest clock delta is within tolerance: 96.683361ms
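Worked out from the two timestamps above: 1720468629.404388528 (the guest's date +%s.%N reading, whose format verbs the logger mangles into %!s(MISSING).%!N(MISSING)) minus 1720468629.307705167 (the controller's reference time) is 0.096683361 s, which is exactly the reported 96.683361ms delta; because that is within minikube's skew tolerance, no guest clock adjustment is attempted.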
	I0708 19:57:09.428408   25689 start.go:83] releasing machines lock for "ha-511021-m03", held for 24.559980204s
	I0708 19:57:09.428431   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:09.428694   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 19:57:09.431379   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.431749   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.431776   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.433988   25689 out.go:177] * Found network options:
	I0708 19:57:09.435267   25689 out.go:177]   - NO_PROXY=192.168.39.33,192.168.39.216
	W0708 19:57:09.436484   25689 proxy.go:119] fail to check proxy env: Error ip not in block
	W0708 19:57:09.436507   25689 proxy.go:119] fail to check proxy env: Error ip not in block
	I0708 19:57:09.436522   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:09.437152   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:09.437343   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:09.437438   25689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 19:57:09.437473   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	W0708 19:57:09.437536   25689 proxy.go:119] fail to check proxy env: Error ip not in block
	W0708 19:57:09.437559   25689 proxy.go:119] fail to check proxy env: Error ip not in block
	I0708 19:57:09.437621   25689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 19:57:09.437643   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:09.440477   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.440568   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.440793   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.440820   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.440952   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.440972   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.440989   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:09.441158   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:09.441174   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:09.441339   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:09.441352   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:09.441505   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:09.441501   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 19:57:09.441659   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 19:57:09.681469   25689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 19:57:09.687612   25689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 19:57:09.687692   25689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 19:57:09.704050   25689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 19:57:09.704073   25689 start.go:494] detecting cgroup driver to use...
	I0708 19:57:09.704129   25689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 19:57:09.720919   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 19:57:09.736474   25689 docker.go:217] disabling cri-docker service (if available) ...
	I0708 19:57:09.736540   25689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 19:57:09.751202   25689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 19:57:09.765460   25689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 19:57:09.890467   25689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 19:57:10.062358   25689 docker.go:233] disabling docker service ...
	I0708 19:57:10.062428   25689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 19:57:10.077177   25689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 19:57:10.090747   25689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 19:57:10.210122   25689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 19:57:10.325009   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 19:57:10.340324   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 19:57:10.360011   25689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 19:57:10.360073   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:57:10.372377   25689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 19:57:10.372447   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:57:10.383391   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:57:10.393837   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:57:10.404540   25689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 19:57:10.415811   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:57:10.428220   25689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:57:10.446649   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
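Taken together, the sed edits above should leave the CRI-O drop-in with registry.k8s.io/pause:3.9 as the pause image, cgroupfs as the cgroup manager, conmon in the "pod" cgroup, and unprivileged low ports enabled. An illustrative way to confirm the result (key/value lines only; the exact file layout is assumed):

    # run on ha-511021-m03 (sketch)
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # roughly expected:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",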
	I0708 19:57:10.457657   25689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 19:57:10.467320   25689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 19:57:10.467375   25689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 19:57:10.483062   25689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 19:57:10.493676   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:57:10.617943   25689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 19:57:10.751365   25689 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 19:57:10.751438   25689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 19:57:10.756529   25689 start.go:562] Will wait 60s for crictl version
	I0708 19:57:10.756589   25689 ssh_runner.go:195] Run: which crictl
	I0708 19:57:10.760562   25689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 19:57:10.804209   25689 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
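The version probe above succeeds because the /etc/crictl.yaml written a few lines earlier points crictl at the CRI-O socket. An explicit equivalent, bypassing that config file, would be the following sketch run on the node; its output should match the Version / RuntimeName / RuntimeVersion lines just above:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version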
	I0708 19:57:10.804285   25689 ssh_runner.go:195] Run: crio --version
	I0708 19:57:10.837994   25689 ssh_runner.go:195] Run: crio --version
	I0708 19:57:10.870751   25689 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 19:57:10.872070   25689 out.go:177]   - env NO_PROXY=192.168.39.33
	I0708 19:57:10.873397   25689 out.go:177]   - env NO_PROXY=192.168.39.33,192.168.39.216
	I0708 19:57:10.874843   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 19:57:10.877528   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:10.877940   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:10.877971   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:10.878177   25689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 19:57:10.883258   25689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
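The bash one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the host-side gateway 192.168.39.1, replacing any stale entry. A quick check on the node after this step:

    # run on ha-511021-m03 (sketch)
    grep host.minikube.internal /etc/hosts
    # expected: 192.168.39.1    host.minikube.internal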
	I0708 19:57:10.896197   25689 mustload.go:65] Loading cluster: ha-511021
	I0708 19:57:10.896452   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:57:10.896728   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:57:10.896773   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:57:10.912477   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42533
	I0708 19:57:10.912904   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:57:10.913330   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:57:10.913350   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:57:10.913687   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:57:10.913889   25689 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 19:57:10.915501   25689 host.go:66] Checking if "ha-511021" exists ...
	I0708 19:57:10.915765   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:57:10.915795   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:57:10.932616   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0708 19:57:10.933215   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:57:10.933656   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:57:10.933676   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:57:10.933973   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:57:10.934145   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:57:10.934320   25689 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021 for IP: 192.168.39.70
	I0708 19:57:10.934334   25689 certs.go:194] generating shared ca certs ...
	I0708 19:57:10.934353   25689 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:57:10.934505   25689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 19:57:10.934564   25689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 19:57:10.934579   25689 certs.go:256] generating profile certs ...
	I0708 19:57:10.934675   25689 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key
	I0708 19:57:10.934706   25689 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a97293a6
	I0708 19:57:10.934727   25689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a97293a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.33 192.168.39.216 192.168.39.70 192.168.39.254]
	I0708 19:57:11.186337   25689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a97293a6 ...
	I0708 19:57:11.186366   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a97293a6: {Name:mk4a8d0195207cfa7335a3764eebf9c499e522fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:57:11.186539   25689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a97293a6 ...
	I0708 19:57:11.186554   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a97293a6: {Name:mkbf5807ae56dc882b5c365ab0ded64ac1264cab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:57:11.186648   25689 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a97293a6 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt
	I0708 19:57:11.186792   25689 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a97293a6 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key
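The apiserver serving certificate generated just above is what lets this third control plane be reached both directly and through the HA VIP: its IP SANs cover 10.96.0.1, 127.0.0.1, 10.0.0.1, the three node IPs and 192.168.39.254, as listed in the crypto.go line. A hedged way to inspect them from the host running the test:

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt \
        | grep -A1 'Subject Alternative Name'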
	I0708 19:57:11.186948   25689 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key
	I0708 19:57:11.186964   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 19:57:11.186983   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 19:57:11.187003   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 19:57:11.187023   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 19:57:11.187041   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 19:57:11.187060   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 19:57:11.187079   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 19:57:11.187096   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 19:57:11.187153   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 19:57:11.187190   25689 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 19:57:11.187203   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 19:57:11.187236   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 19:57:11.187271   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 19:57:11.187301   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 19:57:11.187353   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 19:57:11.187388   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem -> /usr/share/ca-certificates/13141.pem
	I0708 19:57:11.187408   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /usr/share/ca-certificates/131412.pem
	I0708 19:57:11.187426   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:57:11.187480   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:57:11.190361   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:57:11.190768   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:57:11.190794   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:57:11.191018   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:57:11.191208   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:57:11.191318   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:57:11.191435   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:57:11.267789   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0708 19:57:11.276963   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0708 19:57:11.290610   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0708 19:57:11.296404   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0708 19:57:11.306894   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0708 19:57:11.311355   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0708 19:57:11.323594   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0708 19:57:11.328064   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0708 19:57:11.350627   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0708 19:57:11.355118   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0708 19:57:11.365649   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0708 19:57:11.369585   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0708 19:57:11.382513   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 19:57:11.410413   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 19:57:11.439035   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 19:57:11.466164   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 19:57:11.490615   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0708 19:57:11.517748   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 19:57:11.544235   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 19:57:11.570357   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 19:57:11.596155   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 19:57:11.621728   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 19:57:11.648124   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 19:57:11.676095   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0708 19:57:11.694076   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0708 19:57:11.711753   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0708 19:57:11.729257   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0708 19:57:11.747066   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0708 19:57:11.764137   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0708 19:57:11.781356   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0708 19:57:11.798294   25689 ssh_runner.go:195] Run: openssl version
	I0708 19:57:11.804430   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 19:57:11.815489   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 19:57:11.820172   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 19:57:11.820236   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 19:57:11.826389   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 19:57:11.838649   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 19:57:11.850789   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 19:57:11.855920   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 19:57:11.855978   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 19:57:11.861937   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 19:57:11.873594   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 19:57:11.885448   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:57:11.891160   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:57:11.891230   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:57:11.897885   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 19:57:11.909930   25689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 19:57:11.914536   25689 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 19:57:11.914590   25689 kubeadm.go:928] updating node {m03 192.168.39.70 8443 v1.30.2 crio true true} ...
	I0708 19:57:11.914677   25689 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-511021-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
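The [Service] override above is the per-node kubelet configuration: ExecStart is cleared and re-set so the kubelet uses this machine's IP (--node-ip=192.168.39.70) and hostname (--hostname-override=ha-511021-m03) together with the bootstrap kubeconfig. Assuming the drop-in path used by the scp further down in this log, it can be reviewed on the node with:

    # run on ha-511021-m03 (sketch)
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl cat kubelet    # base unit plus drop-ins, as systemd resolves them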
	I0708 19:57:11.914700   25689 kube-vip.go:115] generating kube-vip config ...
	I0708 19:57:11.914733   25689 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0708 19:57:11.935213   25689 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0708 19:57:11.935286   25689 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
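The static pod above runs kube-vip with ARP advertisement (vip_arp) and leader election on the plndr-cp-lock lease, so that exactly one control-plane node holds the 192.168.39.254 VIP on eth0 and load-balances port 8443 (the "auto-enabling control-plane load-balancing" line above); the manifest itself is copied to /etc/kubernetes/manifests/kube-vip.yaml further down in this log. A hedged check on whichever node currently holds the lease:

    # run on the current kube-vip leader (sketch)
    ip -4 addr show dev eth0 | grep 192.168.39.254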
	I0708 19:57:11.935352   25689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 19:57:11.946569   25689 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0708 19:57:11.946619   25689 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0708 19:57:11.957244   25689 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0708 19:57:11.957259   25689 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0708 19:57:11.957270   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0708 19:57:11.957304   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 19:57:11.957244   25689 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0708 19:57:11.957360   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0708 19:57:11.957341   25689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0708 19:57:11.957439   25689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0708 19:57:11.963872   25689 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0708 19:57:11.963926   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0708 19:57:11.984081   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0708 19:57:11.984142   25689 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0708 19:57:11.984170   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0708 19:57:11.984233   25689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0708 19:57:12.039949   25689 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0708 19:57:12.039992   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
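The kubeadm/kubelet/kubectl URLs above carry a checksum=file:...sha256 hint, i.e. the download is verified against the published .sha256 file. A rough stand-alone equivalent of such a verified fetch, for kubeadm only and not what the test itself ran (here the binaries were copied from the local cache), would be:

    curl -LO https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm
    curl -LO https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check   # expect: kubeadm: OK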
	I0708 19:57:12.892198   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0708 19:57:12.904526   25689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0708 19:57:12.922036   25689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 19:57:12.939969   25689 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0708 19:57:12.957406   25689 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0708 19:57:12.961650   25689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:57:12.975896   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:57:13.105518   25689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:57:13.124811   25689 host.go:66] Checking if "ha-511021" exists ...
	I0708 19:57:13.125236   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:57:13.125289   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:57:13.140275   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0708 19:57:13.140755   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:57:13.141333   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:57:13.141360   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:57:13.141704   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:57:13.141913   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:57:13.142087   25689 start.go:316] joinCluster: &{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cluster
Name:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:57:13.142222   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0708 19:57:13.142236   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:57:13.145111   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:57:13.145521   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:57:13.145550   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:57:13.145666   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:57:13.145857   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:57:13.146009   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:57:13.146126   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:57:13.305793   25689 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:57:13.305843   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zpusg1.50b8ceh2h8t3zmox --discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-511021-m03 --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443"
	I0708 19:57:37.116998   25689 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zpusg1.50b8ceh2h8t3zmox --discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-511021-m03 --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443": (23.811131572s)
	I0708 19:57:37.117034   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0708 19:57:37.730737   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-511021-m03 minikube.k8s.io/updated_at=2024_07_08T19_57_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=ha-511021 minikube.k8s.io/primary=false
	I0708 19:57:37.871695   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-511021-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0708 19:57:38.005853   25689 start.go:318] duration metric: took 24.863762753s to joinCluster
	I0708 19:57:38.005940   25689 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:57:38.006297   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:57:38.007273   25689 out.go:177] * Verifying Kubernetes components...
	I0708 19:57:38.008505   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:57:38.302547   25689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:57:38.352466   25689 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:57:38.352816   25689 kapi.go:59] client config for ha-511021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key", CAFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfdf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0708 19:57:38.352892   25689 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.33:8443
	I0708 19:57:38.353168   25689 node_ready.go:35] waiting up to 6m0s for node "ha-511021-m03" to be "Ready" ...
	I0708 19:57:38.353256   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:38.353267   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:38.353277   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:38.353284   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:38.359797   25689 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0708 19:57:38.854247   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:38.854271   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:38.854284   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:38.854291   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:38.858471   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:39.353450   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:39.353473   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:39.353485   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:39.353491   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:39.356708   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:39.854112   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:39.854132   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:39.854140   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:39.854145   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:39.857558   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:40.354410   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:40.354436   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:40.354447   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:40.354451   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:40.357844   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:40.358477   25689 node_ready.go:53] node "ha-511021-m03" has status "Ready":"False"
	I0708 19:57:40.854117   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:40.854141   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:40.854152   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:40.854159   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:40.857149   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:41.354299   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:41.354321   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.354332   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.354338   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.363211   25689 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0708 19:57:41.363749   25689 node_ready.go:49] node "ha-511021-m03" has status "Ready":"True"
	I0708 19:57:41.363766   25689 node_ready.go:38] duration metric: took 3.010579366s for node "ha-511021-m03" to be "Ready" ...
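
Editor's note: the node_ready.go block above is a plain polling loop: roughly every 500ms the client GETs /api/v1/nodes/ha-511021-m03 and checks whether the node's Ready condition has turned True. Below is a minimal client-go sketch of that kind of wait, assuming a recent client-go; the helper name, kubeconfig path, and 500ms interval are illustrative and not minikube's actual node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports
// condition Ready=True or the timeout expires (hypothetical helper,
// mirroring the GET /api/v1/nodes/<name> loop in the log).
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// placeholder kubeconfig path; the test run uses its own profile kubeconfig
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-511021-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node ha-511021-m03 is Ready")
}

The 6m0s budget in the sketch matches the "Will wait 6m0s for node" line logged at 19:57:38.
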
	I0708 19:57:41.363773   25689 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 19:57:41.363848   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:57:41.363864   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.363873   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.363879   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.371276   25689 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0708 19:57:41.380480   25689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4lzjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.380556   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4lzjf
	I0708 19:57:41.380562   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.380569   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.380575   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.384033   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.384956   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:41.384974   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.384982   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.384993   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.388152   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.389129   25689 pod_ready.go:92] pod "coredns-7db6d8ff4d-4lzjf" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:41.389149   25689 pod_ready.go:81] duration metric: took 8.642992ms for pod "coredns-7db6d8ff4d-4lzjf" in "kube-system" namespace to be "Ready" ...
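
Editor's note: the pod_ready checks that follow repeat the same pattern per system pod: a GET on the pod in kube-system paired with a GET on the node it is scheduled to, until Ready=True. Outside the test harness, the equivalent condition can be checked by hand with kubectl wait (kubeconfig path taken from the log above; the 6m timeout mirrors the logged wait budget):

kubectl --kubeconfig /home/jenkins/minikube-integration/19195-5988/kubeconfig -n kube-system wait --for=condition=Ready pod/coredns-7db6d8ff4d-4lzjf --timeout=6m
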
	I0708 19:57:41.389161   25689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w6m9c" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.389236   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w6m9c
	I0708 19:57:41.389248   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.389258   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.389263   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.392877   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.393697   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:41.393715   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.393725   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.393731   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.397534   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.398149   25689 pod_ready.go:92] pod "coredns-7db6d8ff4d-w6m9c" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:41.398170   25689 pod_ready.go:81] duration metric: took 9.001626ms for pod "coredns-7db6d8ff4d-w6m9c" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.398182   25689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.398269   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021
	I0708 19:57:41.398281   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.398290   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.398304   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.402297   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.403133   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:41.403154   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.403164   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.403168   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.406676   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.407108   25689 pod_ready.go:92] pod "etcd-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:41.407130   25689 pod_ready.go:81] duration metric: took 8.931313ms for pod "etcd-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.407147   25689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.407202   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:57:41.407209   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.407216   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.407220   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.410135   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:41.410991   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:41.411036   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.411058   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.411075   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.414039   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:41.414551   25689 pod_ready.go:92] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:41.414565   25689 pod_ready.go:81] duration metric: took 7.409427ms for pod "etcd-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.414572   25689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.554912   25689 request.go:629] Waited for 140.281013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:41.554994   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:41.555026   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.555041   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.555050   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.558653   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.755277   25689 request.go:629] Waited for 195.961416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:41.755330   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:41.755337   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.755348   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.755358   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.759119   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.955133   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:41.955151   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.955160   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.955165   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.959763   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:42.155305   25689 request.go:629] Waited for 194.363685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:42.155378   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:42.155396   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:42.155408   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:42.155418   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:42.158785   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:42.415634   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:42.415658   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:42.415670   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:42.415675   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:42.419326   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:42.554891   25689 request.go:629] Waited for 134.134036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:42.554946   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:42.554958   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:42.554966   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:42.554971   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:42.558375   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:42.915736   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:42.915763   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:42.915787   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:42.915793   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:42.919479   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:42.954776   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:42.954798   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:42.954809   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:42.954814   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:42.958277   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:43.414764   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:43.414786   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:43.414793   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:43.414796   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:43.418460   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:43.419505   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:43.419521   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:43.419528   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:43.419532   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:43.422207   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:43.422813   25689 pod_ready.go:102] pod "etcd-ha-511021-m03" in "kube-system" namespace has status "Ready":"False"
	I0708 19:57:43.915164   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:43.915191   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:43.915203   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:43.915209   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:43.918664   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:43.919728   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:43.919745   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:43.919751   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:43.919755   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:43.922887   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:44.415794   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:44.415818   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:44.415829   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:44.415833   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:44.419405   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:44.420650   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:44.420669   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:44.420676   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:44.420680   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:44.423531   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:44.914939   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:44.914960   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:44.914968   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:44.914973   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:44.918699   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:44.919571   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:44.919585   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:44.919594   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:44.919600   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:44.922941   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:45.414943   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:45.414973   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.414982   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.414987   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.419810   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:45.420552   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:45.420569   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.420578   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.420583   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.424046   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:45.424598   25689 pod_ready.go:92] pod "etcd-ha-511021-m03" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:45.424614   25689 pod_ready.go:81] duration metric: took 4.01003595s for pod "etcd-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:45.424630   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:45.424697   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021
	I0708 19:57:45.424706   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.424714   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.424716   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.427584   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:45.428318   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:45.428330   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.428336   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.428342   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.431249   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:45.431900   25689 pod_ready.go:92] pod "kube-apiserver-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:45.431920   25689 pod_ready.go:81] duration metric: took 7.282529ms for pod "kube-apiserver-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:45.431930   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:45.555284   25689 request.go:629] Waited for 123.294572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m02
	I0708 19:57:45.555358   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m02
	I0708 19:57:45.555365   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.555380   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.555391   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.558718   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:45.755073   25689 request.go:629] Waited for 195.375359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:45.755152   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:45.755164   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.755175   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.755182   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.759746   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:45.760556   25689 pod_ready.go:92] pod "kube-apiserver-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:45.760575   25689 pod_ready.go:81] duration metric: took 328.639072ms for pod "kube-apiserver-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:45.760584   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:45.955219   25689 request.go:629] Waited for 194.56747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:45.955276   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:45.955281   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.955289   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.955295   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.958428   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:46.154506   25689 request.go:629] Waited for 195.29988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:46.154584   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:46.154593   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:46.154601   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:46.154604   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:46.158314   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:46.355049   25689 request.go:629] Waited for 94.258126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:46.355101   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:46.355106   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:46.355113   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:46.355119   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:46.358409   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:46.554332   25689 request.go:629] Waited for 195.282095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:46.554394   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:46.554402   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:46.554413   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:46.554423   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:46.557464   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:46.760925   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:46.760947   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:46.760957   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:46.760963   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:46.764864   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:46.955142   25689 request.go:629] Waited for 189.344829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:46.955234   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:46.955245   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:46.955256   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:46.955269   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:46.959166   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:47.260826   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:47.260848   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:47.260856   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:47.260862   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:47.265501   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:47.354423   25689 request.go:629] Waited for 88.21471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:47.354482   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:47.354497   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:47.354505   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:47.354513   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:47.357779   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:47.760952   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:47.760972   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:47.760983   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:47.760990   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:47.765651   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:47.766553   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:47.766571   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:47.766581   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:47.766589   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:47.770481   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:47.770998   25689 pod_ready.go:102] pod "kube-apiserver-ha-511021-m03" in "kube-system" namespace has status "Ready":"False"
	I0708 19:57:48.261756   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:48.261775   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:48.261783   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:48.261787   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:48.265718   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:48.266725   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:48.266741   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:48.266748   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:48.266753   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:48.270180   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:48.760971   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:48.760991   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:48.760999   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:48.761003   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:48.764632   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:48.765264   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:48.765282   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:48.765290   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:48.765294   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:48.768362   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:49.261193   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:49.261218   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.261230   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.261238   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.264469   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:49.265198   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:49.265215   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.265225   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.265233   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.268019   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:49.761403   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:49.761423   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.761432   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.761440   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.764714   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:49.765551   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:49.765570   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.765578   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.765583   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.768464   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:49.769030   25689 pod_ready.go:92] pod "kube-apiserver-ha-511021-m03" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:49.769048   25689 pod_ready.go:81] duration metric: took 4.008458309s for pod "kube-apiserver-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:49.769057   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:49.769104   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021
	I0708 19:57:49.769111   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.769117   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.769120   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.772289   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:49.773063   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:49.773079   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.773089   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.773095   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.776244   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:49.776766   25689 pod_ready.go:92] pod "kube-controller-manager-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:49.776782   25689 pod_ready.go:81] duration metric: took 7.71841ms for pod "kube-controller-manager-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:49.776793   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:49.954505   25689 request.go:629] Waited for 177.649705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m02
	I0708 19:57:49.954594   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m02
	I0708 19:57:49.954603   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.954611   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.954616   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.958008   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:50.154438   25689 request.go:629] Waited for 195.274615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:50.154522   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:50.154532   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:50.154539   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:50.154544   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:50.157669   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:50.158300   25689 pod_ready.go:92] pod "kube-controller-manager-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:50.158323   25689 pod_ready.go:81] duration metric: took 381.520984ms for pod "kube-controller-manager-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:50.158337   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:50.355341   25689 request.go:629] Waited for 196.911618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:50.355396   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:50.355401   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:50.355408   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:50.355412   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:50.358900   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:50.554299   25689 request.go:629] Waited for 194.290121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:50.554374   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:50.554383   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:50.554391   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:50.554396   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:50.557576   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:50.754809   25689 request.go:629] Waited for 96.282836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:50.754906   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:50.754918   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:50.754930   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:50.754942   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:50.758615   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:50.954346   25689 request.go:629] Waited for 195.002132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:50.954418   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:50.954426   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:50.954433   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:50.954439   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:50.957665   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:51.159292   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:51.159316   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:51.159324   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:51.159327   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:51.162565   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:51.354777   25689 request.go:629] Waited for 191.386264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:51.354825   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:51.354830   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:51.354839   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:51.354847   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:51.358170   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:51.659184   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:51.659210   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:51.659220   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:51.659228   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:51.662583   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:51.755145   25689 request.go:629] Waited for 91.841217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:51.755215   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:51.755227   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:51.755238   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:51.755245   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:51.758802   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:52.158674   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:52.158693   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:52.158700   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:52.158705   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:52.163055   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:52.164359   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:52.164398   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:52.164411   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:52.164420   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:52.166824   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:52.167411   25689 pod_ready.go:102] pod "kube-controller-manager-ha-511021-m03" in "kube-system" namespace has status "Ready":"False"
	I0708 19:57:52.658703   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:52.658729   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:52.658738   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:52.658742   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:52.663232   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:52.664122   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:52.664141   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:52.664152   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:52.664159   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:52.666495   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:52.667011   25689 pod_ready.go:92] pod "kube-controller-manager-ha-511021-m03" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:52.667033   25689 pod_ready.go:81] duration metric: took 2.508688698s for pod "kube-controller-manager-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:52.667046   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-976tb" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:52.754283   25689 request.go:629] Waited for 87.167609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-976tb
	I0708 19:57:52.754353   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-976tb
	I0708 19:57:52.754365   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:52.754376   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:52.754384   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:52.757538   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:52.954385   25689 request.go:629] Waited for 196.291943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:52.954518   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:52.954558   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:52.954579   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:52.954598   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:52.958408   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:52.958987   25689 pod_ready.go:92] pod "kube-proxy-976tb" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:52.959003   25689 pod_ready.go:81] duration metric: took 291.95006ms for pod "kube-proxy-976tb" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:52.959013   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-scxw5" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:53.154569   25689 request.go:629] Waited for 195.476879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scxw5
	I0708 19:57:53.154629   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scxw5
	I0708 19:57:53.154634   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:53.154641   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:53.154649   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:53.157795   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:53.354830   25689 request.go:629] Waited for 196.38475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:53.354891   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:53.354899   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:53.354909   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:53.354919   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:53.358447   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:53.359336   25689 pod_ready.go:92] pod "kube-proxy-scxw5" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:53.359361   25689 pod_ready.go:81] duration metric: took 400.338804ms for pod "kube-proxy-scxw5" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:53.359373   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tmkjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:53.554796   25689 request.go:629] Waited for 195.355417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmkjf
	I0708 19:57:53.554866   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmkjf
	I0708 19:57:53.554875   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:53.554885   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:53.554892   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:53.557837   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:53.755051   25689 request.go:629] Waited for 196.38985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:53.755121   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:53.755128   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:53.755142   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:53.755152   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:53.758094   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:53.758768   25689 pod_ready.go:92] pod "kube-proxy-tmkjf" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:53.758788   25689 pod_ready.go:81] duration metric: took 399.40706ms for pod "kube-proxy-tmkjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:53.758799   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:53.954953   25689 request.go:629] Waited for 196.08838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021
	I0708 19:57:53.955016   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021
	I0708 19:57:53.955022   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:53.955029   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:53.955034   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:53.958507   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:54.154992   25689 request.go:629] Waited for 195.336631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:54.155064   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:54.155070   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:54.155077   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:54.155084   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:54.158017   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:54.158667   25689 pod_ready.go:92] pod "kube-scheduler-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:54.158690   25689 pod_ready.go:81] duration metric: took 399.882729ms for pod "kube-scheduler-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:54.158701   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:54.354770   25689 request.go:629] Waited for 196.005297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021-m02
	I0708 19:57:54.354848   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021-m02
	I0708 19:57:54.354856   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:54.354864   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:54.354867   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:54.358191   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:54.555142   25689 request.go:629] Waited for 196.370626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:54.555223   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:54.555231   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:54.555242   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:54.555248   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:54.558273   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:54.559024   25689 pod_ready.go:92] pod "kube-scheduler-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:54.559045   25689 pod_ready.go:81] duration metric: took 400.33803ms for pod "kube-scheduler-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:54.559055   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:54.755323   25689 request.go:629] Waited for 196.208181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021-m03
	I0708 19:57:54.755393   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021-m03
	I0708 19:57:54.755399   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:54.755406   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:54.755412   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:54.758976   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:54.954303   25689 request.go:629] Waited for 194.776882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:54.954363   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:54.954368   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:54.954375   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:54.954381   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:54.957356   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:54.957974   25689 pod_ready.go:92] pod "kube-scheduler-ha-511021-m03" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:54.957994   25689 pod_ready.go:81] duration metric: took 398.931537ms for pod "kube-scheduler-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:54.958005   25689 pod_ready.go:38] duration metric: took 13.594221303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 19:57:54.958022   25689 api_server.go:52] waiting for apiserver process to appear ...
	I0708 19:57:54.958071   25689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 19:57:54.975511   25689 api_server.go:72] duration metric: took 16.969531319s to wait for apiserver process to appear ...
	I0708 19:57:54.975538   25689 api_server.go:88] waiting for apiserver healthz status ...
	I0708 19:57:54.975558   25689 api_server.go:253] Checking apiserver healthz at https://192.168.39.33:8443/healthz ...
	I0708 19:57:54.979920   25689 api_server.go:279] https://192.168.39.33:8443/healthz returned 200:
	ok
	I0708 19:57:54.979975   25689 round_trippers.go:463] GET https://192.168.39.33:8443/version
	I0708 19:57:54.979981   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:54.979988   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:54.979992   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:54.981252   25689 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 19:57:54.981319   25689 api_server.go:141] control plane version: v1.30.2
	I0708 19:57:54.981337   25689 api_server.go:131] duration metric: took 5.791915ms to wait for apiserver health ...
	I0708 19:57:54.981345   25689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 19:57:55.154749   25689 request.go:629] Waited for 173.325339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:57:55.154819   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:57:55.154826   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:55.154834   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:55.154839   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:55.161344   25689 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0708 19:57:55.169278   25689 system_pods.go:59] 24 kube-system pods found
	I0708 19:57:55.169307   25689 system_pods.go:61] "coredns-7db6d8ff4d-4lzjf" [4bcfc11d-8368-4c95-bf64-5b3d09c4b455] Running
	I0708 19:57:55.169312   25689 system_pods.go:61] "coredns-7db6d8ff4d-w6m9c" [8f45dd66-3096-4878-8b2b-96dcf12bbef2] Running
	I0708 19:57:55.169317   25689 system_pods.go:61] "etcd-ha-511021" [52134689-3a05-4bfa-ae28-2696f8bf0ccb] Running
	I0708 19:57:55.169321   25689 system_pods.go:61] "etcd-ha-511021-m02" [acc2d6d9-6796-453d-a5bb-492c28c5eb94] Running
	I0708 19:57:55.169324   25689 system_pods.go:61] "etcd-ha-511021-m03" [abc1be6f-b619-440b-b6b0-12a99f7f78f1] Running
	I0708 19:57:55.169327   25689 system_pods.go:61] "kindnet-4f49v" [1f0b50ca-73cb-4ffb-9676-09e3a28d7636] Running
	I0708 19:57:55.169330   25689 system_pods.go:61] "kindnet-gn8kn" [68f966e1-e40c-4e6e-8fa4-d3167090fa7c] Running
	I0708 19:57:55.169333   25689 system_pods.go:61] "kindnet-kfpzq" [8400c214-1e12-4869-9d9f-c8d872e29156] Running
	I0708 19:57:55.169336   25689 system_pods.go:61] "kube-apiserver-ha-511021" [e5f0c179-18b9-40ce-9c9c-bfe810f6a422] Running
	I0708 19:57:55.169339   25689 system_pods.go:61] "kube-apiserver-ha-511021-m02" [33e08ded-e75f-4f56-8d52-5447d025d348] Running
	I0708 19:57:55.169342   25689 system_pods.go:61] "kube-apiserver-ha-511021-m03" [ec75847c-55d5-4c98-9fd0-1ee345ff8f77] Running
	I0708 19:57:55.169345   25689 system_pods.go:61] "kube-controller-manager-ha-511021" [136879af-0997-416e-956a-632e940e1da6] Running
	I0708 19:57:55.169348   25689 system_pods.go:61] "kube-controller-manager-ha-511021-m02" [a5d3e392-c4f1-4784-b234-e57a5e9689a9] Running
	I0708 19:57:55.169352   25689 system_pods.go:61] "kube-controller-manager-ha-511021-m03" [9447741b-bf2a-47b5-a3a5-131b27ff0401] Running
	I0708 19:57:55.169354   25689 system_pods.go:61] "kube-proxy-976tb" [97fd998d-9281-40b0-bd6d-cebf8d4bfa02] Running
	I0708 19:57:55.169357   25689 system_pods.go:61] "kube-proxy-scxw5" [6a01e530-81f0-495a-a9a3-576ef3b0de36] Running
	I0708 19:57:55.169360   25689 system_pods.go:61] "kube-proxy-tmkjf" [fb7c00aa-f846-430e-92a2-04cd2fc8a62b] Running
	I0708 19:57:55.169363   25689 system_pods.go:61] "kube-scheduler-ha-511021" [978f9f3f-1bfe-4d9c-9dcf-5a410f101c87] Running
	I0708 19:57:55.169367   25689 system_pods.go:61] "kube-scheduler-ha-511021-m02" [3a4313c1-625d-4ba1-873f-da3ae493f1b5] Running
	I0708 19:57:55.169370   25689 system_pods.go:61] "kube-scheduler-ha-511021-m03" [32ac0620-f107-4073-9a1d-54bae7ce0823] Running
	I0708 19:57:55.169375   25689 system_pods.go:61] "kube-vip-ha-511021" [c2d1c07a-51ae-4264-9fbc-fd7af40ac2d0] Running
	I0708 19:57:55.169378   25689 system_pods.go:61] "kube-vip-ha-511021-m02" [ebc968ae-70c7-45ac-aa9b-ddc9e7142f71] Running
	I0708 19:57:55.169382   25689 system_pods.go:61] "kube-vip-ha-511021-m03" [3d6940a2-b7ef-4b14-a83a-32d61b4f98f4] Running
	I0708 19:57:55.169387   25689 system_pods.go:61] "storage-provisioner" [7d02def4-3af1-4268-a8fa-072c6fd71c83] Running
	I0708 19:57:55.169393   25689 system_pods.go:74] duration metric: took 188.039111ms to wait for pod list to return data ...
	I0708 19:57:55.169402   25689 default_sa.go:34] waiting for default service account to be created ...
	I0708 19:57:55.354813   25689 request.go:629] Waited for 185.34987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/default/serviceaccounts
	I0708 19:57:55.354866   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/default/serviceaccounts
	I0708 19:57:55.354872   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:55.354879   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:55.354884   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:55.358648   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:55.358782   25689 default_sa.go:45] found service account: "default"
	I0708 19:57:55.358799   25689 default_sa.go:55] duration metric: took 189.390221ms for default service account to be created ...
	I0708 19:57:55.358809   25689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 19:57:55.555161   25689 request.go:629] Waited for 196.272852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:57:55.555249   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:57:55.555260   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:55.555268   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:55.555272   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:55.563279   25689 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0708 19:57:55.569869   25689 system_pods.go:86] 24 kube-system pods found
	I0708 19:57:55.569898   25689 system_pods.go:89] "coredns-7db6d8ff4d-4lzjf" [4bcfc11d-8368-4c95-bf64-5b3d09c4b455] Running
	I0708 19:57:55.569903   25689 system_pods.go:89] "coredns-7db6d8ff4d-w6m9c" [8f45dd66-3096-4878-8b2b-96dcf12bbef2] Running
	I0708 19:57:55.569908   25689 system_pods.go:89] "etcd-ha-511021" [52134689-3a05-4bfa-ae28-2696f8bf0ccb] Running
	I0708 19:57:55.569913   25689 system_pods.go:89] "etcd-ha-511021-m02" [acc2d6d9-6796-453d-a5bb-492c28c5eb94] Running
	I0708 19:57:55.569917   25689 system_pods.go:89] "etcd-ha-511021-m03" [abc1be6f-b619-440b-b6b0-12a99f7f78f1] Running
	I0708 19:57:55.569921   25689 system_pods.go:89] "kindnet-4f49v" [1f0b50ca-73cb-4ffb-9676-09e3a28d7636] Running
	I0708 19:57:55.569925   25689 system_pods.go:89] "kindnet-gn8kn" [68f966e1-e40c-4e6e-8fa4-d3167090fa7c] Running
	I0708 19:57:55.569933   25689 system_pods.go:89] "kindnet-kfpzq" [8400c214-1e12-4869-9d9f-c8d872e29156] Running
	I0708 19:57:55.569937   25689 system_pods.go:89] "kube-apiserver-ha-511021" [e5f0c179-18b9-40ce-9c9c-bfe810f6a422] Running
	I0708 19:57:55.569940   25689 system_pods.go:89] "kube-apiserver-ha-511021-m02" [33e08ded-e75f-4f56-8d52-5447d025d348] Running
	I0708 19:57:55.569945   25689 system_pods.go:89] "kube-apiserver-ha-511021-m03" [ec75847c-55d5-4c98-9fd0-1ee345ff8f77] Running
	I0708 19:57:55.569952   25689 system_pods.go:89] "kube-controller-manager-ha-511021" [136879af-0997-416e-956a-632e940e1da6] Running
	I0708 19:57:55.569956   25689 system_pods.go:89] "kube-controller-manager-ha-511021-m02" [a5d3e392-c4f1-4784-b234-e57a5e9689a9] Running
	I0708 19:57:55.569962   25689 system_pods.go:89] "kube-controller-manager-ha-511021-m03" [9447741b-bf2a-47b5-a3a5-131b27ff0401] Running
	I0708 19:57:55.569966   25689 system_pods.go:89] "kube-proxy-976tb" [97fd998d-9281-40b0-bd6d-cebf8d4bfa02] Running
	I0708 19:57:55.569970   25689 system_pods.go:89] "kube-proxy-scxw5" [6a01e530-81f0-495a-a9a3-576ef3b0de36] Running
	I0708 19:57:55.569974   25689 system_pods.go:89] "kube-proxy-tmkjf" [fb7c00aa-f846-430e-92a2-04cd2fc8a62b] Running
	I0708 19:57:55.569978   25689 system_pods.go:89] "kube-scheduler-ha-511021" [978f9f3f-1bfe-4d9c-9dcf-5a410f101c87] Running
	I0708 19:57:55.569982   25689 system_pods.go:89] "kube-scheduler-ha-511021-m02" [3a4313c1-625d-4ba1-873f-da3ae493f1b5] Running
	I0708 19:57:55.569987   25689 system_pods.go:89] "kube-scheduler-ha-511021-m03" [32ac0620-f107-4073-9a1d-54bae7ce0823] Running
	I0708 19:57:55.569991   25689 system_pods.go:89] "kube-vip-ha-511021" [c2d1c07a-51ae-4264-9fbc-fd7af40ac2d0] Running
	I0708 19:57:55.569997   25689 system_pods.go:89] "kube-vip-ha-511021-m02" [ebc968ae-70c7-45ac-aa9b-ddc9e7142f71] Running
	I0708 19:57:55.570001   25689 system_pods.go:89] "kube-vip-ha-511021-m03" [3d6940a2-b7ef-4b14-a83a-32d61b4f98f4] Running
	I0708 19:57:55.570005   25689 system_pods.go:89] "storage-provisioner" [7d02def4-3af1-4268-a8fa-072c6fd71c83] Running
	I0708 19:57:55.570011   25689 system_pods.go:126] duration metric: took 211.19314ms to wait for k8s-apps to be running ...
	I0708 19:57:55.570020   25689 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 19:57:55.570079   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 19:57:55.586998   25689 system_svc.go:56] duration metric: took 16.970716ms WaitForService to wait for kubelet
	I0708 19:57:55.587021   25689 kubeadm.go:576] duration metric: took 17.581048799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 19:57:55.587041   25689 node_conditions.go:102] verifying NodePressure condition ...
	I0708 19:57:55.754351   25689 request.go:629] Waited for 167.245947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes
	I0708 19:57:55.754438   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes
	I0708 19:57:55.754450   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:55.754461   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:55.754470   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:55.759888   25689 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0708 19:57:55.761044   25689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 19:57:55.761063   25689 node_conditions.go:123] node cpu capacity is 2
	I0708 19:57:55.761077   25689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 19:57:55.761081   25689 node_conditions.go:123] node cpu capacity is 2
	I0708 19:57:55.761084   25689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 19:57:55.761087   25689 node_conditions.go:123] node cpu capacity is 2
	I0708 19:57:55.761091   25689 node_conditions.go:105] duration metric: took 174.046017ms to run NodePressure ...
	I0708 19:57:55.761104   25689 start.go:240] waiting for startup goroutines ...
	I0708 19:57:55.761130   25689 start.go:254] writing updated cluster config ...
	I0708 19:57:55.761437   25689 ssh_runner.go:195] Run: rm -f paused
	I0708 19:57:55.814873   25689 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 19:57:55.816928   25689 out.go:177] * Done! kubectl is now configured to use "ha-511021" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.343009875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63fc57a6-331c-4935-8359-3e3fc1ee1859 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.344311639Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe94a237-ba79-47d8-acb9-658f2b27217e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.344733575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720468883344714462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe94a237-ba79-47d8-acb9-658f2b27217e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.345312249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2598beed-1df3-4262-8e33-43ef52ea053d name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.345368421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2598beed-1df3-4262-8e33-43ef52ea053d name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.345582648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720468678300500015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535991010866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535980957678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0efdf4f079d33157f227c1d53e6e122777f79d2ad8a8d3b8435680085b1d3a68,PodSandboxId:eaef8d52b039d91daa97e3d7bf2cf97fc0d8ed804cb932c4b85a80bef9d9fc93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1720468534377552262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef250a5d2c6701c36dbb63dc1494bd02a11629e58b9b6ad5ab4a0585f444dbe9,PodSandboxId:f429df990fee63fd9c3c13b64f2baa48c08f6ef862689251b9ec13aaa2eddea3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17204685
32996636063,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720468532672412988,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8ad312a5acddb79be337823087ee2b87d36262359d11cd3661e4a31d3026ec,PodSandboxId:fc46a08650b0c113dca0fc2c08b563545e66b03a33e24cba90956eefb7a018d4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720468514032913032,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becedfb7466881b4e5bb5eeaa93d5ece,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720468512223596473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720468512188740790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1c59e04eb8e9c5a9503853a55dd8185bbd443c359ce6d37d9f0c062505e67,PodSandboxId:15cc9c5cd6042f512709da858a518c73462ed5c54944466ad74f4ad42cb59e35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720468512204616479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4326cf8a34b61a7baf29d68ba8e1b5c1c5f72972d74e1a73df5303f1cef7586,PodSandboxId:38bebe295e2bf82cd7b16e9b5f818475dd29df00260db1612a9b45d7b67f0879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720468512135109452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2598beed-1df3-4262-8e33-43ef52ea053d name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.384725931Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffae8e1d-8f42-44c1-9604-e9038192fc96 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.384844518Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffae8e1d-8f42-44c1-9604-e9038192fc96 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.386272984Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8c8dacf-7834-4274-9a44-86ae30e0534a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.386708633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720468883386683194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8c8dacf-7834-4274-9a44-86ae30e0534a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.387405801Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc3208d4-2193-42c4-9f39-15c80345ca9e name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.387518320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc3208d4-2193-42c4-9f39-15c80345ca9e name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.387764207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720468678300500015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535991010866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535980957678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0efdf4f079d33157f227c1d53e6e122777f79d2ad8a8d3b8435680085b1d3a68,PodSandboxId:eaef8d52b039d91daa97e3d7bf2cf97fc0d8ed804cb932c4b85a80bef9d9fc93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1720468534377552262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef250a5d2c6701c36dbb63dc1494bd02a11629e58b9b6ad5ab4a0585f444dbe9,PodSandboxId:f429df990fee63fd9c3c13b64f2baa48c08f6ef862689251b9ec13aaa2eddea3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17204685
32996636063,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720468532672412988,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8ad312a5acddb79be337823087ee2b87d36262359d11cd3661e4a31d3026ec,PodSandboxId:fc46a08650b0c113dca0fc2c08b563545e66b03a33e24cba90956eefb7a018d4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720468514032913032,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becedfb7466881b4e5bb5eeaa93d5ece,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720468512223596473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720468512188740790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1c59e04eb8e9c5a9503853a55dd8185bbd443c359ce6d37d9f0c062505e67,PodSandboxId:15cc9c5cd6042f512709da858a518c73462ed5c54944466ad74f4ad42cb59e35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720468512204616479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4326cf8a34b61a7baf29d68ba8e1b5c1c5f72972d74e1a73df5303f1cef7586,PodSandboxId:38bebe295e2bf82cd7b16e9b5f818475dd29df00260db1612a9b45d7b67f0879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720468512135109452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc3208d4-2193-42c4-9f39-15c80345ca9e name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.427890486Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=4bc468d4-10ec-482d-ad1c-8be657626800 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.428182567Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-w8l78,Uid:0dc81a07-5014-49b4-9c2f-e1806d1705e3,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468677055389758,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-08T19:57:56.734732222Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-w6m9c,Uid:8f45dd66-3096-4878-8b2b-96dcf12bbef2,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1720468535744709637,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-08T19:55:33.936152430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4lzjf,Uid:4bcfc11d-8368-4c95-bf64-5b3d09c4b455,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468535736991829,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-07-08T19:55:33.927631619Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eaef8d52b039d91daa97e3d7bf2cf97fc0d8ed804cb932c4b85a80bef9d9fc93,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7d02def4-3af1-4268-a8fa-072c6fd71c83,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468534244651760,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-08T19:55:33.935870180Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&PodSandboxMetadata{Name:kube-proxy-tmkjf,Uid:fb7c00aa-f846-430e-92a2-04cd2fc8a62b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468532486306066,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-07-08T19:55:31.568614861Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f429df990fee63fd9c3c13b64f2baa48c08f6ef862689251b9ec13aaa2eddea3,Metadata:&PodSandboxMetadata{Name:kindnet-4f49v,Uid:1f0b50ca-73cb-4ffb-9676-09e3a28d7636,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468532485103050,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-08T19:55:31.559611901Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&PodSandboxMetadata{Name:etcd-ha-511021,Uid:d92a647e1bb34408bc27cdc3497f9940,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1720468511950568899,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.33:2379,kubernetes.io/config.hash: d92a647e1bb34408bc27cdc3497f9940,kubernetes.io/config.seen: 2024-07-08T19:55:11.470707981Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-511021,Uid:8c3ccf7626b62492304c03ada682e9ee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468511949511412,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b
62492304c03ada682e9ee,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8c3ccf7626b62492304c03ada682e9ee,kubernetes.io/config.seen: 2024-07-08T19:55:11.470754700Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fc46a08650b0c113dca0fc2c08b563545e66b03a33e24cba90956eefb7a018d4,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-511021,Uid:becedfb7466881b4e5bb5eeaa93d5ece,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468511948478205,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becedfb7466881b4e5bb5eeaa93d5ece,},Annotations:map[string]string{kubernetes.io/config.hash: becedfb7466881b4e5bb5eeaa93d5ece,kubernetes.io/config.seen: 2024-07-08T19:55:11.470755475Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:15cc9c5cd6042f512709da858a518c73462ed5c54944466ad74f4ad42cb59e35,Metadata:&PodSandboxMetadata{Name:kube-co
ntroller-manager-ha-511021,Uid:a571722211ffd00c8b1df39a68520333,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468511948000627,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a571722211ffd00c8b1df39a68520333,kubernetes.io/config.seen: 2024-07-08T19:55:11.470753563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:38bebe295e2bf82cd7b16e9b5f818475dd29df00260db1612a9b45d7b67f0879,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-511021,Uid:42b9f382d32fb78346f5160840013b51,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468511930189876,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.33:8443,kubernetes.io/config.hash: 42b9f382d32fb78346f5160840013b51,kubernetes.io/config.seen: 2024-07-08T19:55:11.470751755Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4bc468d4-10ec-482d-ad1c-8be657626800 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.429064522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0bee2a09-1437-445b-8e3c-ea94e0e651c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.429147973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0bee2a09-1437-445b-8e3c-ea94e0e651c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.429373070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720468678300500015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535991010866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535980957678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0efdf4f079d33157f227c1d53e6e122777f79d2ad8a8d3b8435680085b1d3a68,PodSandboxId:eaef8d52b039d91daa97e3d7bf2cf97fc0d8ed804cb932c4b85a80bef9d9fc93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1720468534377552262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef250a5d2c6701c36dbb63dc1494bd02a11629e58b9b6ad5ab4a0585f444dbe9,PodSandboxId:f429df990fee63fd9c3c13b64f2baa48c08f6ef862689251b9ec13aaa2eddea3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17204685
32996636063,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720468532672412988,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8ad312a5acddb79be337823087ee2b87d36262359d11cd3661e4a31d3026ec,PodSandboxId:fc46a08650b0c113dca0fc2c08b563545e66b03a33e24cba90956eefb7a018d4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720468514032913032,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becedfb7466881b4e5bb5eeaa93d5ece,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720468512223596473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720468512188740790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1c59e04eb8e9c5a9503853a55dd8185bbd443c359ce6d37d9f0c062505e67,PodSandboxId:15cc9c5cd6042f512709da858a518c73462ed5c54944466ad74f4ad42cb59e35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720468512204616479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4326cf8a34b61a7baf29d68ba8e1b5c1c5f72972d74e1a73df5303f1cef7586,PodSandboxId:38bebe295e2bf82cd7b16e9b5f818475dd29df00260db1612a9b45d7b67f0879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720468512135109452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0bee2a09-1437-445b-8e3c-ea94e0e651c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.443158969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11117c88-5037-4c3e-a802-3ef7d0183513 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.443249436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11117c88-5037-4c3e-a802-3ef7d0183513 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.444391983Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=003f914e-0646-47b1-b2c5-c26c5460bec7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.444888050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720468883444864541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=003f914e-0646-47b1-b2c5-c26c5460bec7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.445779229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad0e7cbe-fe3a-4d5a-be5e-6acc566408c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.445984228Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad0e7cbe-fe3a-4d5a-be5e-6acc566408c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:01:23 ha-511021 crio[678]: time="2024-07-08 20:01:23.446625627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720468678300500015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535991010866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535980957678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0efdf4f079d33157f227c1d53e6e122777f79d2ad8a8d3b8435680085b1d3a68,PodSandboxId:eaef8d52b039d91daa97e3d7bf2cf97fc0d8ed804cb932c4b85a80bef9d9fc93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1720468534377552262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef250a5d2c6701c36dbb63dc1494bd02a11629e58b9b6ad5ab4a0585f444dbe9,PodSandboxId:f429df990fee63fd9c3c13b64f2baa48c08f6ef862689251b9ec13aaa2eddea3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17204685
32996636063,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720468532672412988,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8ad312a5acddb79be337823087ee2b87d36262359d11cd3661e4a31d3026ec,PodSandboxId:fc46a08650b0c113dca0fc2c08b563545e66b03a33e24cba90956eefb7a018d4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720468514032913032,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becedfb7466881b4e5bb5eeaa93d5ece,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720468512223596473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720468512188740790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1c59e04eb8e9c5a9503853a55dd8185bbd443c359ce6d37d9f0c062505e67,PodSandboxId:15cc9c5cd6042f512709da858a518c73462ed5c54944466ad74f4ad42cb59e35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720468512204616479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4326cf8a34b61a7baf29d68ba8e1b5c1c5f72972d74e1a73df5303f1cef7586,PodSandboxId:38bebe295e2bf82cd7b16e9b5f818475dd29df00260db1612a9b45d7b67f0879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720468512135109452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad0e7cbe-fe3a-4d5a-be5e-6acc566408c1 name=/runtime.v1.RuntimeService/ListContainers
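
The CRI-O entries above come from the debug-level journal on the primary node, so the full CRI request/response trace is available there. While the cluster from this run is still up, a comparable excerpt can be pulled straight from the VM (a sketch; the profile name is taken from the log lines above, and the systemd unit is assumed to be crio, as the crio[678] tag suggests):

  out/minikube-linux-amd64 -p ha-511021 ssh "sudo journalctl -u crio -n 200 --no-pager"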
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f1ad4f76c216a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   b1cbe60f17e1a       busybox-fc5497c4f-w8l78
	6b083875d2679       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   a361ba0082084       coredns-7db6d8ff4d-w6m9c
	499dc5b41a3d6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   3765b2ad464be       coredns-7db6d8ff4d-4lzjf
	0efdf4f079d33       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   eaef8d52b039d       storage-provisioner
	ef250a5d2c670       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      5 minutes ago       Running             kindnet-cni               0                   f429df990fee6       kindnet-4f49v
	67153dce61aaa       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      5 minutes ago       Running             kube-proxy                0                   8cba18d6a0140       kube-proxy-tmkjf
	dd8ad312a5acd       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   fc46a08650b0c       kube-vip-ha-511021
	08189f5ac12ce       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   2e4a76498c1cf       etcd-ha-511021
	0ed1c59e04eb8       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      6 minutes ago       Running             kube-controller-manager   0                   15cc9c5cd6042       kube-controller-manager-ha-511021
	019d794c36af8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      6 minutes ago       Running             kube-scheduler            0                   bc2b7b56fb60f       kube-scheduler-ha-511021
	e4326cf8a34b6       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      6 minutes ago       Running             kube-apiserver            0                   38bebe295e2bf       kube-apiserver-ha-511021
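
This table is the crictl view of the same containers returned in the ListContainers responses. It can be regenerated from inside the node while the profile still exists (a sketch, assuming the ha-511021 profile from this run):

  out/minikube-linux-amd64 -p ha-511021 ssh "sudo crictl ps -a"
  out/minikube-linux-amd64 -p ha-511021 ssh "sudo crictl logs <container-id>"

crictl prints 13-character ID prefixes, and those prefixes should resolve when passed back to crictl logs, e.g. e4326cf8a34b6 for the kube-apiserver container.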
	
	
	==> coredns [499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7] <==
	[INFO] 10.244.0.4:59111 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000090789s
	[INFO] 10.244.0.4:36217 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001689218s
	[INFO] 10.244.2.2:60648 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000081401s
	[INFO] 10.244.1.2:34341 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003785945s
	[INFO] 10.244.1.2:60350 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225614s
	[INFO] 10.244.1.2:48742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000218522s
	[INFO] 10.244.1.2:60141 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145244s
	[INFO] 10.244.0.4:58500 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001476805s
	[INFO] 10.244.0.4:53415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090934s
	[INFO] 10.244.0.4:60685 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159681s
	[INFO] 10.244.2.2:35117 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216541s
	[INFO] 10.244.2.2:56929 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000209242s
	[INFO] 10.244.2.2:57601 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099474s
	[INFO] 10.244.1.2:51767 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189518s
	[INFO] 10.244.1.2:53177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013929s
	[INFO] 10.244.0.4:44104 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095184s
	[INFO] 10.244.2.2:51012 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106956s
	[INFO] 10.244.2.2:37460 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124276s
	[INFO] 10.244.2.2:46238 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124359s
	[INFO] 10.244.1.2:56514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153739s
	[INFO] 10.244.1.2:45870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000362406s
	[INFO] 10.244.0.4:54901 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101371s
	[INFO] 10.244.0.4:38430 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128119s
	[INFO] 10.244.0.4:59433 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112582s
	[INFO] 10.244.2.2:50495 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000089543s
	
	
	==> coredns [6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa] <==
	[INFO] 10.244.1.2:51626 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156089s
	[INFO] 10.244.1.2:56377 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010828331s
	[INFO] 10.244.1.2:38901 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119209s
	[INFO] 10.244.0.4:40100 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000072232s
	[INFO] 10.244.0.4:51493 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001936632s
	[INFO] 10.244.0.4:45493 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011856s
	[INFO] 10.244.0.4:43450 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049467s
	[INFO] 10.244.0.4:42950 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177837s
	[INFO] 10.244.2.2:44783 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001772539s
	[INFO] 10.244.2.2:60536 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011424s
	[INFO] 10.244.2.2:56160 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090498s
	[INFO] 10.244.2.2:60942 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001479529s
	[INFO] 10.244.2.2:59066 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078767s
	[INFO] 10.244.1.2:33094 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298986s
	[INFO] 10.244.1.2:41194 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092808s
	[INFO] 10.244.0.4:44172 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168392s
	[INFO] 10.244.0.4:47644 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085824s
	[INFO] 10.244.0.4:45776 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131918s
	[INFO] 10.244.2.2:53642 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164258s
	[INFO] 10.244.1.2:32877 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000282103s
	[INFO] 10.244.1.2:59022 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013901s
	[INFO] 10.244.0.4:35939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129873s
	[INFO] 10.244.2.2:48648 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161626s
	[INFO] 10.244.2.2:59172 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147702s
	[INFO] 10.244.2.2:45542 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156821s
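
Both CoreDNS replicas look healthy in these excerpts: every query is answered NOERROR or NXDOMAIN, with no SERVFAIL or upstream errors. The same logs can be re-fetched while the cluster exists (a sketch; the kubeconfig context is assumed to match the profile name, and k8s-app=kube-dns is the usual label on kubeadm-deployed CoreDNS):

  kubectl --context ha-511021 -n kube-system logs coredns-7db6d8ff4d-w6m9c
  kubectl --context ha-511021 -n kube-system logs -l k8s-app=kube-dns --prefix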
	
	
	==> describe nodes <==
	Name:               ha-511021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T19_55_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:55:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:01:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:58:22 +0000   Mon, 08 Jul 2024 19:55:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:58:22 +0000   Mon, 08 Jul 2024 19:55:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:58:22 +0000   Mon, 08 Jul 2024 19:55:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:58:22 +0000   Mon, 08 Jul 2024 19:55:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.33
	  Hostname:    ha-511021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b87893acdd9a476ea34795541f3789df
	  System UUID:                b87893ac-dd9a-476e-a347-95541f3789df
	  Boot ID:                    17494c0f-24c9-4604-bfc5-8f8d6538a4f6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w8l78              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 coredns-7db6d8ff4d-4lzjf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m51s
	  kube-system                 coredns-7db6d8ff4d-w6m9c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m51s
	  kube-system                 etcd-ha-511021                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m5s
	  kube-system                 kindnet-4f49v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m52s
	  kube-system                 kube-apiserver-ha-511021             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-controller-manager-ha-511021    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-proxy-tmkjf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-scheduler-ha-511021             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-vip-ha-511021                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m50s  kube-proxy       
	  Normal  Starting                 6m5s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m5s   kubelet          Node ha-511021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s   kubelet          Node ha-511021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s   kubelet          Node ha-511021 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m52s  node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Normal  NodeReady                5m50s  kubelet          Node ha-511021 status is now: NodeReady
	  Normal  RegisteredNode           4m43s  node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Normal  RegisteredNode           3m31s  node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	
	
	Name:               ha-511021-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T19_56_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:56:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 19:58:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 08 Jul 2024 19:58:23 +0000   Mon, 08 Jul 2024 19:59:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 08 Jul 2024 19:58:23 +0000   Mon, 08 Jul 2024 19:59:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 08 Jul 2024 19:58:23 +0000   Mon, 08 Jul 2024 19:59:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 08 Jul 2024 19:58:23 +0000   Mon, 08 Jul 2024 19:59:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    ha-511021-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 09ff24d6fb9848b0b108f4ecb99eedc3
	  System UUID:                09ff24d6-fb98-48b0-b108-f4ecb99eedc3
	  Boot ID:                    44b68e74-b329-4b25-97a6-3396a30d544a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xjfx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 etcd-ha-511021-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m
	  kube-system                 kindnet-gn8kn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m2s
	  kube-system                 kube-apiserver-ha-511021-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-ha-511021-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-976tb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-ha-511021-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-vip-ha-511021-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m58s                kube-proxy       
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node ha-511021-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node ha-511021-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x7 over 5m2s)  kubelet          Node ha-511021-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m43s                node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  RegisteredNode           3m31s                node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  NodeNotReady             107s                 node-controller  Node ha-511021-m02 status is now: NodeNotReady
	
	
	Name:               ha-511021-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T19_57_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:57:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:01:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:58:04 +0000   Mon, 08 Jul 2024 19:57:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:58:04 +0000   Mon, 08 Jul 2024 19:57:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:58:04 +0000   Mon, 08 Jul 2024 19:57:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:58:04 +0000   Mon, 08 Jul 2024 19:57:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-511021-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a1265a3cabd4e6aae62914cc287dffa
	  System UUID:                8a1265a3-cabd-4e6a-ae62-914cc287dffa
	  Boot ID:                    6affb020-1648-4456-b4d6-301592f6f240
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-x9p75                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 etcd-ha-511021-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m48s
	  kube-system                 kindnet-kfpzq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m50s
	  kube-system                 kube-apiserver-ha-511021-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-ha-511021-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-scxw5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-scheduler-ha-511021-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-vip-ha-511021-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node ha-511021-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node ha-511021-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node ha-511021-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-511021-m03 event: Registered Node ha-511021-m03 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-511021-m03 event: Registered Node ha-511021-m03 in Controller
	  Normal  RegisteredNode           3m31s                  node-controller  Node ha-511021-m03 event: Registered Node ha-511021-m03 in Controller
	
	
	Name:               ha-511021-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T19_58_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:58:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:01:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:59:04 +0000   Mon, 08 Jul 2024 19:58:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:59:04 +0000   Mon, 08 Jul 2024 19:58:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:59:04 +0000   Mon, 08 Jul 2024 19:58:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:59:04 +0000   Mon, 08 Jul 2024 19:58:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    ha-511021-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef479bd2efc3487eb39d936b4399c97b
	  System UUID:                ef479bd2-efc3-487e-b39d-936b4399c97b
	  Boot ID:                    9e902555-dfb9-4fff-947a-24e55fd76688
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bbbp6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m49s
	  kube-system                 kube-proxy-7mb58    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m50s (x2 over 2m50s)  kubelet          Node ha-511021-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s (x2 over 2m50s)  kubelet          Node ha-511021-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s (x2 over 2m50s)  kubelet          Node ha-511021-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal  RegisteredNode           2m46s                  node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-511021-m04 status is now: NodeReady
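
Taken together, the four node descriptions show only ha-511021-m02 as unhealthy: its conditions are all Unknown ("Kubelet stopped posting node status"), it carries the node.kubernetes.io/unreachable taints, and the controller recorded NodeNotReady 107s before this dump, while ha-511021, ha-511021-m03 and ha-511021-m04 report Ready. A live cluster can be checked the same way (a sketch; the context name is assumed to equal the profile name):

  kubectl --context ha-511021 get nodes -o wide
  kubectl --context ha-511021 describe node ha-511021-m02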
	
	
	==> dmesg <==
	[Jul 8 19:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050477] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040158] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.560798] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.360481] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.523061] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul 8 19:55] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.119364] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.209787] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.142097] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.285009] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.308511] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.058301] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.483782] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.535916] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.022132] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.103961] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.289495] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.234845] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e] <==
	{"level":"warn","ts":"2024-07-08T20:01:23.732567Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.733374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.743537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.750924Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.766953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.770675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.77475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.782185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.788777Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.796529Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.800474Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.803718Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.811884Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.820386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.827842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.828024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.831534Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.833553Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.834962Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.84098Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.847347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.85351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.915942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.917691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:01:23.933183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:01:23 up 6 min,  0 users,  load average: 0.14, 0.20, 0.11
	Linux ha-511021 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ef250a5d2c6701c36dbb63dc1494bd02a11629e58b9b6ad5ab4a0585f444dbe9] <==
	I0708 20:00:44.296308       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:00:54.302568       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:00:54.302609       1 main.go:227] handling current node
	I0708 20:00:54.302620       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:00:54.302625       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:00:54.302729       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0708 20:00:54.302751       1 main.go:250] Node ha-511021-m03 has CIDR [10.244.2.0/24] 
	I0708 20:00:54.302868       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:00:54.302892       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:01:04.316287       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:01:04.316330       1 main.go:227] handling current node
	I0708 20:01:04.316344       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:01:04.316349       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:01:04.316464       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0708 20:01:04.316490       1 main.go:250] Node ha-511021-m03 has CIDR [10.244.2.0/24] 
	I0708 20:01:04.316556       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:01:04.316579       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:01:14.323327       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:01:14.323529       1 main.go:227] handling current node
	I0708 20:01:14.323579       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:01:14.323600       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:01:14.323742       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0708 20:01:14.323762       1 main.go:250] Node ha-511021-m03 has CIDR [10.244.2.0/24] 
	I0708 20:01:14.323903       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:01:14.323931       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e4326cf8a34b61a7baf29d68ba8e1b5c1c5f72972d74e1a73df5303f1cef7586] <==
	W0708 19:55:17.747429       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.33]
	I0708 19:55:17.748563       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 19:55:17.753732       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 19:55:17.928724       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 19:55:18.874618       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 19:55:18.900685       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0708 19:55:18.919491       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 19:55:31.486461       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0708 19:55:32.033454       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0708 19:57:59.835641       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45684: use of closed network connection
	E0708 19:58:00.036890       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45702: use of closed network connection
	E0708 19:58:00.227515       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45718: use of closed network connection
	E0708 19:58:00.442844       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45738: use of closed network connection
	E0708 19:58:00.628129       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45752: use of closed network connection
	E0708 19:58:00.809482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45762: use of closed network connection
	E0708 19:58:01.001441       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45780: use of closed network connection
	E0708 19:58:01.193852       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45796: use of closed network connection
	E0708 19:58:01.376713       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45806: use of closed network connection
	E0708 19:58:01.666045       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45838: use of closed network connection
	E0708 19:58:01.847636       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45852: use of closed network connection
	E0708 19:58:02.039611       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45856: use of closed network connection
	E0708 19:58:02.221519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45884: use of closed network connection
	E0708 19:58:02.420192       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45900: use of closed network connection
	E0708 19:58:02.595747       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45906: use of closed network connection
	W0708 19:59:17.760184       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.33 192.168.39.70]
	
	
	==> kube-controller-manager [0ed1c59e04eb8e9c5a9503853a55dd8185bbd443c359ce6d37d9f0c062505e67] <==
	I0708 19:57:33.883717       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-511021-m03" podCIDRs=["10.244.2.0/24"]
	I0708 19:57:36.544686       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-511021-m03"
	I0708 19:57:56.741579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.759013ms"
	I0708 19:57:56.767073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.366792ms"
	I0708 19:57:56.849489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.344716ms"
	I0708 19:57:57.065957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="216.268114ms"
	I0708 19:57:57.153949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.932308ms"
	I0708 19:57:57.249947       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.929148ms"
	E0708 19:57:57.250162       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0708 19:57:57.311962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.620171ms"
	I0708 19:57:57.312087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.723µs"
	I0708 19:57:58.652325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.251102ms"
	I0708 19:57:58.652594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.27µs"
	I0708 19:57:59.105148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.860931ms"
	I0708 19:57:59.105270       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.396µs"
	I0708 19:57:59.363448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.784415ms"
	I0708 19:57:59.363654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.163µs"
	E0708 19:58:33.825591       1 certificate_controller.go:146] Sync csr-v8ghx failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-v8ghx": the object has been modified; please apply your changes to the latest version and try again
	I0708 19:58:33.945736       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-511021-m04\" does not exist"
	I0708 19:58:34.128845       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-511021-m04" podCIDRs=["10.244.3.0/24"]
	I0708 19:58:36.607656       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-511021-m04"
	I0708 19:58:42.501604       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-511021-m04"
	I0708 19:59:36.631244       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-511021-m04"
	I0708 19:59:36.776634       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.391327ms"
	I0708 19:59:36.776876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.037µs"
	
	
	==> kube-proxy [67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19] <==
	I0708 19:55:32.852876       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:55:32.874081       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.33"]
	I0708 19:55:32.914145       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:55:32.914257       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:55:32.914291       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:55:32.917559       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:55:32.917764       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:55:32.918008       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:55:32.920064       1 config.go:192] "Starting service config controller"
	I0708 19:55:32.920133       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:55:32.920176       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:55:32.920192       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:55:32.920779       1 config.go:319] "Starting node config controller"
	I0708 19:55:32.920927       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:55:33.020536       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 19:55:33.020597       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:55:33.021000       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9] <==
	E0708 19:55:17.153223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 19:55:17.257366       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 19:55:17.257414       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 19:55:17.314276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 19:55:17.314328       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0708 19:55:19.466683       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0708 19:57:33.939776       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kfpzq\": pod kindnet-kfpzq is already assigned to node \"ha-511021-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-kfpzq" node="ha-511021-m03"
	E0708 19:57:33.940071       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 8400c214-1e12-4869-9d9f-c8d872e29156(kube-system/kindnet-kfpzq) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kfpzq"
	E0708 19:57:33.940108       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kfpzq\": pod kindnet-kfpzq is already assigned to node \"ha-511021-m03\"" pod="kube-system/kindnet-kfpzq"
	I0708 19:57:33.940158       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kfpzq" node="ha-511021-m03"
	E0708 19:57:33.956776       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-scxw5\": pod kube-proxy-scxw5 is already assigned to node \"ha-511021-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-scxw5" node="ha-511021-m03"
	E0708 19:57:33.956917       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6a01e530-81f0-495a-a9a3-576ef3b0de36(kube-system/kube-proxy-scxw5) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-scxw5"
	E0708 19:57:33.956939       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-scxw5\": pod kube-proxy-scxw5 is already assigned to node \"ha-511021-m03\"" pod="kube-system/kube-proxy-scxw5"
	I0708 19:57:33.957127       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-scxw5" node="ha-511021-m03"
	I0708 19:57:56.702453       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="993a3e9e-2fe3-41de-9bc1-b98386749da9" pod="default/busybox-fc5497c4f-x9p75" assumedNode="ha-511021-m03" currentNode="ha-511021-m02"
	E0708 19:57:56.713330       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-x9p75\": pod busybox-fc5497c4f-x9p75 is already assigned to node \"ha-511021-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-x9p75" node="ha-511021-m02"
	E0708 19:57:56.713407       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 993a3e9e-2fe3-41de-9bc1-b98386749da9(default/busybox-fc5497c4f-x9p75) was assumed on ha-511021-m02 but assigned to ha-511021-m03" pod="default/busybox-fc5497c4f-x9p75"
	E0708 19:57:56.713625       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-x9p75\": pod busybox-fc5497c4f-x9p75 is already assigned to node \"ha-511021-m03\"" pod="default/busybox-fc5497c4f-x9p75"
	I0708 19:57:56.713692       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-x9p75" node="ha-511021-m03"
	E0708 19:57:56.750725       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-w8l78\": pod busybox-fc5497c4f-w8l78 is already assigned to node \"ha-511021\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-w8l78" node="ha-511021"
	E0708 19:57:56.750928       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0dc81a07-5014-49b4-9c2f-e1806d1705e3(default/busybox-fc5497c4f-w8l78) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-w8l78"
	E0708 19:57:56.750955       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-w8l78\": pod busybox-fc5497c4f-w8l78 is already assigned to node \"ha-511021\"" pod="default/busybox-fc5497c4f-w8l78"
	I0708 19:57:56.750975       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-w8l78" node="ha-511021"
	E0708 19:58:34.168305       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7mb58\": pod kube-proxy-7mb58 is already assigned to node \"ha-511021-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7mb58" node="ha-511021-m04"
	E0708 19:58:34.168419       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7mb58\": pod kube-proxy-7mb58 is already assigned to node \"ha-511021-m04\"" pod="kube-system/kube-proxy-7mb58"
	
	
	==> kubelet <==
	Jul 08 19:57:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:57:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 19:57:56 ha-511021 kubelet[1369]: I0708 19:57:56.735361    1369 topology_manager.go:215] "Topology Admit Handler" podUID="0dc81a07-5014-49b4-9c2f-e1806d1705e3" podNamespace="default" podName="busybox-fc5497c4f-w8l78"
	Jul 08 19:57:56 ha-511021 kubelet[1369]: I0708 19:57:56.796637    1369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c25b9\" (UniqueName: \"kubernetes.io/projected/0dc81a07-5014-49b4-9c2f-e1806d1705e3-kube-api-access-c25b9\") pod \"busybox-fc5497c4f-w8l78\" (UID: \"0dc81a07-5014-49b4-9c2f-e1806d1705e3\") " pod="default/busybox-fc5497c4f-w8l78"
	Jul 08 19:57:58 ha-511021 kubelet[1369]: I0708 19:57:58.639143    1369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-w8l78" podStartSLOduration=1.754375308 podStartE2EDuration="2.639072715s" podCreationTimestamp="2024-07-08 19:57:56 +0000 UTC" firstStartedPulling="2024-07-08 19:57:57.403273688 +0000 UTC m=+158.729928017" lastFinishedPulling="2024-07-08 19:57:58.287971096 +0000 UTC m=+159.614625424" observedRunningTime="2024-07-08 19:57:58.638464065 +0000 UTC m=+159.965118413" watchObservedRunningTime="2024-07-08 19:57:58.639072715 +0000 UTC m=+159.965727066"
	Jul 08 19:58:18 ha-511021 kubelet[1369]: E0708 19:58:18.947235    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:58:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:58:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:58:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:58:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 19:59:18 ha-511021 kubelet[1369]: E0708 19:59:18.960448    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:59:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:59:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:59:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:59:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 20:00:18 ha-511021 kubelet[1369]: E0708 20:00:18.946966    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:00:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:00:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:00:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:00:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 20:01:18 ha-511021 kubelet[1369]: E0708 20:01:18.948413    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:01:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:01:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:01:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:01:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-511021 -n ha-511021
helpers_test.go:261: (dbg) Run:  kubectl --context ha-511021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.98s)
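For anyone triaging this failure by hand, the post-mortem above can be reproduced with the same commands the harness ran. This is a minimal sketch, assuming the ha-511021 profile and the out/minikube-linux-amd64 binary from this run are still available locally:

	# Overall cluster status; exits non-zero while a node host reports Error
	out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
	# API server state of the primary control-plane node
	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-511021 -n ha-511021
	# Any pods not in Running phase, across all namespaces
	kubectl --context ha-511021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running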

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (48.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
E0708 20:01:29.733380   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr: exit status 3 (3.204955178s)

                                                
                                                
-- stdout --
	ha-511021
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-511021-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:01:28.462176   30435 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:01:28.462291   30435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:01:28.462301   30435 out.go:304] Setting ErrFile to fd 2...
	I0708 20:01:28.462308   30435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:01:28.462530   30435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:01:28.462684   30435 out.go:298] Setting JSON to false
	I0708 20:01:28.462708   30435 mustload.go:65] Loading cluster: ha-511021
	I0708 20:01:28.462744   30435 notify.go:220] Checking for updates...
	I0708 20:01:28.463060   30435 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:01:28.463073   30435 status.go:255] checking status of ha-511021 ...
	I0708 20:01:28.463532   30435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:28.463578   30435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:28.482172   30435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42127
	I0708 20:01:28.482637   30435 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:28.483267   30435 main.go:141] libmachine: Using API Version  1
	I0708 20:01:28.483293   30435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:28.483655   30435 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:28.483872   30435 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 20:01:28.485500   30435 status.go:330] ha-511021 host status = "Running" (err=<nil>)
	I0708 20:01:28.485520   30435 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:01:28.485806   30435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:28.485842   30435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:28.501154   30435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0708 20:01:28.501510   30435 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:28.501976   30435 main.go:141] libmachine: Using API Version  1
	I0708 20:01:28.502000   30435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:28.502279   30435 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:28.502485   30435 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:01:28.505096   30435 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:28.505434   30435 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:01:28.505461   30435 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:28.505617   30435 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:01:28.505925   30435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:28.505958   30435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:28.521806   30435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44255
	I0708 20:01:28.522231   30435 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:28.522695   30435 main.go:141] libmachine: Using API Version  1
	I0708 20:01:28.522720   30435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:28.523016   30435 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:28.523318   30435 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:01:28.523553   30435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:28.523582   30435 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:01:28.526386   30435 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:28.526828   30435 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:01:28.526850   30435 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:28.527029   30435 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:01:28.527212   30435 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:01:28.527355   30435 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:01:28.527534   30435 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:01:28.606822   30435 ssh_runner.go:195] Run: systemctl --version
	I0708 20:01:28.613180   30435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:28.628362   30435 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:01:28.628392   30435 api_server.go:166] Checking apiserver status ...
	I0708 20:01:28.628424   30435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:01:28.643983   30435 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0708 20:01:28.655541   30435 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:01:28.655592   30435 ssh_runner.go:195] Run: ls
	I0708 20:01:28.660883   30435 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:01:28.667096   30435 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:01:28.667122   30435 status.go:422] ha-511021 apiserver status = Running (err=<nil>)
	I0708 20:01:28.667133   30435 status.go:257] ha-511021 status: &{Name:ha-511021 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:01:28.667163   30435 status.go:255] checking status of ha-511021-m02 ...
	I0708 20:01:28.667579   30435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:28.667623   30435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:28.684123   30435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40551
	I0708 20:01:28.684631   30435 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:28.685165   30435 main.go:141] libmachine: Using API Version  1
	I0708 20:01:28.685192   30435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:28.685496   30435 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:28.685700   30435 main.go:141] libmachine: (ha-511021-m02) Calling .GetState
	I0708 20:01:28.687267   30435 status.go:330] ha-511021-m02 host status = "Running" (err=<nil>)
	I0708 20:01:28.687283   30435 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:01:28.687649   30435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:28.687689   30435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:28.701474   30435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
	I0708 20:01:28.701922   30435 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:28.702454   30435 main.go:141] libmachine: Using API Version  1
	I0708 20:01:28.702475   30435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:28.702755   30435 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:28.702906   30435 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 20:01:28.705683   30435 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:28.706147   30435 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:01:28.706172   30435 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:28.706355   30435 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:01:28.706650   30435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:28.706692   30435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:28.721192   30435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0708 20:01:28.721626   30435 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:28.722051   30435 main.go:141] libmachine: Using API Version  1
	I0708 20:01:28.722070   30435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:28.722426   30435 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:28.722609   30435 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 20:01:28.722795   30435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:28.722811   30435 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 20:01:28.725998   30435 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:28.726471   30435 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:01:28.726503   30435 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:28.726648   30435 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 20:01:28.726833   30435 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 20:01:28.726996   30435 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 20:01:28.727169   30435 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	W0708 20:01:31.267799   30435 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.216:22: connect: no route to host
	W0708 20:01:31.267884   30435 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	E0708 20:01:31.267906   30435 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:31.267918   30435 status.go:257] ha-511021-m02 status: &{Name:ha-511021-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0708 20:01:31.267941   30435 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:31.267955   30435 status.go:255] checking status of ha-511021-m03 ...
	I0708 20:01:31.268413   30435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:31.268470   30435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:31.283973   30435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I0708 20:01:31.284381   30435 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:31.284846   30435 main.go:141] libmachine: Using API Version  1
	I0708 20:01:31.284866   30435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:31.285175   30435 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:31.285359   30435 main.go:141] libmachine: (ha-511021-m03) Calling .GetState
	I0708 20:01:31.286745   30435 status.go:330] ha-511021-m03 host status = "Running" (err=<nil>)
	I0708 20:01:31.286762   30435 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:01:31.287149   30435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:31.287190   30435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:31.302002   30435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35537
	I0708 20:01:31.302474   30435 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:31.302978   30435 main.go:141] libmachine: Using API Version  1
	I0708 20:01:31.303002   30435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:31.303283   30435 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:31.303441   30435 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 20:01:31.306227   30435 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:31.306593   30435 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:01:31.306615   30435 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:31.306762   30435 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:01:31.307120   30435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:31.307153   30435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:31.322004   30435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0708 20:01:31.322406   30435 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:31.322865   30435 main.go:141] libmachine: Using API Version  1
	I0708 20:01:31.322894   30435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:31.323202   30435 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:31.323395   30435 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 20:01:31.323588   30435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:31.323611   30435 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 20:01:31.326288   30435 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:31.326693   30435 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:01:31.326733   30435 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:31.326826   30435 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 20:01:31.326996   30435 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 20:01:31.327151   30435 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 20:01:31.327274   30435 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 20:01:31.411625   30435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:31.428025   30435 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:01:31.428064   30435 api_server.go:166] Checking apiserver status ...
	I0708 20:01:31.428105   30435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:01:31.443936   30435 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0708 20:01:31.455417   30435 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:01:31.455492   30435 ssh_runner.go:195] Run: ls
	I0708 20:01:31.460471   30435 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:01:31.465057   30435 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:01:31.465085   30435 status.go:422] ha-511021-m03 apiserver status = Running (err=<nil>)
	I0708 20:01:31.465096   30435 status.go:257] ha-511021-m03 status: &{Name:ha-511021-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:01:31.465124   30435 status.go:255] checking status of ha-511021-m04 ...
	I0708 20:01:31.465431   30435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:31.465475   30435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:31.480553   30435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0708 20:01:31.480955   30435 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:31.481489   30435 main.go:141] libmachine: Using API Version  1
	I0708 20:01:31.481517   30435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:31.481877   30435 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:31.482110   30435 main.go:141] libmachine: (ha-511021-m04) Calling .GetState
	I0708 20:01:31.483809   30435 status.go:330] ha-511021-m04 host status = "Running" (err=<nil>)
	I0708 20:01:31.483828   30435 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:01:31.484123   30435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:31.484181   30435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:31.499496   30435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I0708 20:01:31.500005   30435 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:31.500530   30435 main.go:141] libmachine: Using API Version  1
	I0708 20:01:31.500553   30435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:31.500836   30435 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:31.500997   30435 main.go:141] libmachine: (ha-511021-m04) Calling .GetIP
	I0708 20:01:31.503986   30435 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:31.504506   30435 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:01:31.504546   30435 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:31.504698   30435 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:01:31.505032   30435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:31.505095   30435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:31.524151   30435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0708 20:01:31.524542   30435 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:31.525058   30435 main.go:141] libmachine: Using API Version  1
	I0708 20:01:31.525084   30435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:31.525431   30435 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:31.525607   30435 main.go:141] libmachine: (ha-511021-m04) Calling .DriverName
	I0708 20:01:31.525803   30435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:31.525836   30435 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHHostname
	I0708 20:01:31.528612   30435 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:31.529082   30435 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:01:31.529114   30435 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:31.529316   30435 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHPort
	I0708 20:01:31.529464   30435 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHKeyPath
	I0708 20:01:31.529555   30435 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHUsername
	I0708 20:01:31.529684   30435 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m04/id_rsa Username:docker}
	I0708 20:01:31.611127   30435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:31.625811   30435 status.go:257] ha-511021-m04 status: &{Name:ha-511021-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr: exit status 3 (4.920443335s)

-- stdout --
	ha-511021
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-511021-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0708 20:01:32.890360   30536 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:01:32.890452   30536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:01:32.890459   30536 out.go:304] Setting ErrFile to fd 2...
	I0708 20:01:32.890463   30536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:01:32.890642   30536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:01:32.890797   30536 out.go:298] Setting JSON to false
	I0708 20:01:32.890829   30536 mustload.go:65] Loading cluster: ha-511021
	I0708 20:01:32.890923   30536 notify.go:220] Checking for updates...
	I0708 20:01:32.891288   30536 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:01:32.891309   30536 status.go:255] checking status of ha-511021 ...
	I0708 20:01:32.891842   30536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:32.891881   30536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:32.909901   30536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37087
	I0708 20:01:32.910419   30536 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:32.910984   30536 main.go:141] libmachine: Using API Version  1
	I0708 20:01:32.911053   30536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:32.911673   30536 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:32.911909   30536 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 20:01:32.913620   30536 status.go:330] ha-511021 host status = "Running" (err=<nil>)
	I0708 20:01:32.913637   30536 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:01:32.913951   30536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:32.913986   30536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:32.930271   30536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I0708 20:01:32.930776   30536 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:32.931278   30536 main.go:141] libmachine: Using API Version  1
	I0708 20:01:32.931300   30536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:32.931635   30536 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:32.931827   30536 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:01:32.934619   30536 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:32.935006   30536 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:01:32.935026   30536 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:32.935192   30536 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:01:32.935507   30536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:32.935545   30536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:32.950927   30536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0708 20:01:32.951467   30536 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:32.951986   30536 main.go:141] libmachine: Using API Version  1
	I0708 20:01:32.952010   30536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:32.952329   30536 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:32.952519   30536 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:01:32.952722   30536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:32.952746   30536 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:01:32.956079   30536 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:32.956502   30536 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:01:32.956525   30536 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:32.956820   30536 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:01:32.956988   30536 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:01:32.957233   30536 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:01:32.957396   30536 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:01:33.044057   30536 ssh_runner.go:195] Run: systemctl --version
	I0708 20:01:33.050882   30536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:33.069684   30536 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:01:33.069719   30536 api_server.go:166] Checking apiserver status ...
	I0708 20:01:33.069761   30536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:01:33.086719   30536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0708 20:01:33.097152   30536 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:01:33.097204   30536 ssh_runner.go:195] Run: ls
	I0708 20:01:33.102216   30536 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:01:33.106555   30536 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:01:33.106578   30536 status.go:422] ha-511021 apiserver status = Running (err=<nil>)
	I0708 20:01:33.106590   30536 status.go:257] ha-511021 status: &{Name:ha-511021 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:01:33.106611   30536 status.go:255] checking status of ha-511021-m02 ...
	I0708 20:01:33.106919   30536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:33.106952   30536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:33.122250   30536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0708 20:01:33.122809   30536 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:33.123307   30536 main.go:141] libmachine: Using API Version  1
	I0708 20:01:33.123328   30536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:33.123659   30536 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:33.123824   30536 main.go:141] libmachine: (ha-511021-m02) Calling .GetState
	I0708 20:01:33.125332   30536 status.go:330] ha-511021-m02 host status = "Running" (err=<nil>)
	I0708 20:01:33.125344   30536 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:01:33.125611   30536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:33.125641   30536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:33.141939   30536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33005
	I0708 20:01:33.142374   30536 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:33.142812   30536 main.go:141] libmachine: Using API Version  1
	I0708 20:01:33.142835   30536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:33.143201   30536 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:33.143435   30536 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 20:01:33.146063   30536 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:33.146479   30536 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:01:33.146507   30536 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:33.146642   30536 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:01:33.146963   30536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:33.146998   30536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:33.162437   30536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40805
	I0708 20:01:33.162857   30536 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:33.163291   30536 main.go:141] libmachine: Using API Version  1
	I0708 20:01:33.163309   30536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:33.163625   30536 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:33.163843   30536 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 20:01:33.164007   30536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:33.164027   30536 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 20:01:33.166960   30536 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:33.167423   30536 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:01:33.167484   30536 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:33.167629   30536 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 20:01:33.167790   30536 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 20:01:33.167948   30536 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 20:01:33.168124   30536 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	W0708 20:01:34.343755   30536 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:34.343804   30536 retry.go:31] will retry after 339.421078ms: dial tcp 192.168.39.216:22: connect: no route to host
	W0708 20:01:37.411687   30536 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.216:22: connect: no route to host
	W0708 20:01:37.411764   30536 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	E0708 20:01:37.411780   30536 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:37.411789   30536 status.go:257] ha-511021-m02 status: &{Name:ha-511021-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0708 20:01:37.411817   30536 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:37.411826   30536 status.go:255] checking status of ha-511021-m03 ...
	I0708 20:01:37.412134   30536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:37.412183   30536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:37.426796   30536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0708 20:01:37.427220   30536 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:37.427760   30536 main.go:141] libmachine: Using API Version  1
	I0708 20:01:37.427786   30536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:37.428146   30536 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:37.428378   30536 main.go:141] libmachine: (ha-511021-m03) Calling .GetState
	I0708 20:01:37.430003   30536 status.go:330] ha-511021-m03 host status = "Running" (err=<nil>)
	I0708 20:01:37.430017   30536 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:01:37.430441   30536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:37.430482   30536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:37.445924   30536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I0708 20:01:37.446335   30536 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:37.446775   30536 main.go:141] libmachine: Using API Version  1
	I0708 20:01:37.446798   30536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:37.447147   30536 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:37.447300   30536 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 20:01:37.450151   30536 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:37.450567   30536 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:01:37.450586   30536 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:37.450730   30536 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:01:37.451281   30536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:37.451326   30536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:37.465827   30536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38001
	I0708 20:01:37.466263   30536 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:37.466736   30536 main.go:141] libmachine: Using API Version  1
	I0708 20:01:37.466772   30536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:37.467107   30536 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:37.467277   30536 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 20:01:37.467523   30536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:37.467559   30536 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 20:01:37.470309   30536 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:37.470702   30536 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:01:37.470726   30536 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:37.470853   30536 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 20:01:37.471015   30536 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 20:01:37.471207   30536 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 20:01:37.471346   30536 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 20:01:37.557113   30536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:37.573363   30536 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:01:37.573391   30536 api_server.go:166] Checking apiserver status ...
	I0708 20:01:37.573441   30536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:01:37.589477   30536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0708 20:01:37.600499   30536 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:01:37.600550   30536 ssh_runner.go:195] Run: ls
	I0708 20:01:37.604990   30536 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:01:37.610954   30536 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:01:37.610976   30536 status.go:422] ha-511021-m03 apiserver status = Running (err=<nil>)
	I0708 20:01:37.610984   30536 status.go:257] ha-511021-m03 status: &{Name:ha-511021-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:01:37.610998   30536 status.go:255] checking status of ha-511021-m04 ...
	I0708 20:01:37.611307   30536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:37.611341   30536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:37.627493   30536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35299
	I0708 20:01:37.627930   30536 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:37.628439   30536 main.go:141] libmachine: Using API Version  1
	I0708 20:01:37.628458   30536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:37.628814   30536 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:37.629030   30536 main.go:141] libmachine: (ha-511021-m04) Calling .GetState
	I0708 20:01:37.630585   30536 status.go:330] ha-511021-m04 host status = "Running" (err=<nil>)
	I0708 20:01:37.630601   30536 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:01:37.630873   30536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:37.630929   30536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:37.645983   30536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0708 20:01:37.646377   30536 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:37.646864   30536 main.go:141] libmachine: Using API Version  1
	I0708 20:01:37.646891   30536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:37.647177   30536 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:37.647387   30536 main.go:141] libmachine: (ha-511021-m04) Calling .GetIP
	I0708 20:01:37.650139   30536 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:37.650544   30536 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:01:37.650576   30536 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:37.650754   30536 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:01:37.651156   30536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:37.651204   30536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:37.666382   30536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38613
	I0708 20:01:37.666749   30536 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:37.667209   30536 main.go:141] libmachine: Using API Version  1
	I0708 20:01:37.667228   30536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:37.667584   30536 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:37.667762   30536 main.go:141] libmachine: (ha-511021-m04) Calling .DriverName
	I0708 20:01:37.667924   30536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:37.667946   30536 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHHostname
	I0708 20:01:37.670826   30536 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:37.671186   30536 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:01:37.671222   30536 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:37.671371   30536 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHPort
	I0708 20:01:37.671558   30536 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHKeyPath
	I0708 20:01:37.671693   30536 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHUsername
	I0708 20:01:37.671791   30536 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m04/id_rsa Username:docker}
	I0708 20:01:37.754823   30536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:37.768762   30536 status.go:257] ha-511021-m04 status: &{Name:ha-511021-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr: exit status 3 (4.362700371s)

-- stdout --
	ha-511021
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-511021-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0708 20:01:39.757887   30638 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:01:39.757988   30638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:01:39.757997   30638 out.go:304] Setting ErrFile to fd 2...
	I0708 20:01:39.758002   30638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:01:39.758170   30638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:01:39.758320   30638 out.go:298] Setting JSON to false
	I0708 20:01:39.758344   30638 mustload.go:65] Loading cluster: ha-511021
	I0708 20:01:39.758434   30638 notify.go:220] Checking for updates...
	I0708 20:01:39.758732   30638 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:01:39.758747   30638 status.go:255] checking status of ha-511021 ...
	I0708 20:01:39.759156   30638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:39.759252   30638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:39.777068   30638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45471
	I0708 20:01:39.777515   30638 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:39.778173   30638 main.go:141] libmachine: Using API Version  1
	I0708 20:01:39.778198   30638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:39.778590   30638 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:39.778832   30638 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 20:01:39.780360   30638 status.go:330] ha-511021 host status = "Running" (err=<nil>)
	I0708 20:01:39.780375   30638 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:01:39.780665   30638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:39.780718   30638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:39.795716   30638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0708 20:01:39.796187   30638 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:39.796689   30638 main.go:141] libmachine: Using API Version  1
	I0708 20:01:39.796710   30638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:39.797181   30638 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:39.797384   30638 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:01:39.799999   30638 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:39.800434   30638 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:01:39.800458   30638 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:39.800558   30638 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:01:39.800855   30638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:39.800895   30638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:39.815525   30638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36501
	I0708 20:01:39.816047   30638 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:39.816594   30638 main.go:141] libmachine: Using API Version  1
	I0708 20:01:39.816616   30638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:39.816938   30638 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:39.817130   30638 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:01:39.817338   30638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:39.817363   30638 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:01:39.820311   30638 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:39.820780   30638 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:01:39.820809   30638 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:39.820951   30638 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:01:39.821124   30638 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:01:39.821290   30638 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:01:39.821434   30638 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:01:39.908350   30638 ssh_runner.go:195] Run: systemctl --version
	I0708 20:01:39.914888   30638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:39.929913   30638 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:01:39.929949   30638 api_server.go:166] Checking apiserver status ...
	I0708 20:01:39.929989   30638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:01:39.944411   30638 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0708 20:01:39.960133   30638 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:01:39.960196   30638 ssh_runner.go:195] Run: ls
	I0708 20:01:39.965260   30638 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:01:39.971233   30638 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:01:39.971259   30638 status.go:422] ha-511021 apiserver status = Running (err=<nil>)
	I0708 20:01:39.971271   30638 status.go:257] ha-511021 status: &{Name:ha-511021 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:01:39.971290   30638 status.go:255] checking status of ha-511021-m02 ...
	I0708 20:01:39.971631   30638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:39.971683   30638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:39.986254   30638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0708 20:01:39.986632   30638 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:39.987132   30638 main.go:141] libmachine: Using API Version  1
	I0708 20:01:39.987165   30638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:39.987484   30638 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:39.987653   30638 main.go:141] libmachine: (ha-511021-m02) Calling .GetState
	I0708 20:01:39.989404   30638 status.go:330] ha-511021-m02 host status = "Running" (err=<nil>)
	I0708 20:01:39.989419   30638 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:01:39.989726   30638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:39.989759   30638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:40.004454   30638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37415
	I0708 20:01:40.004908   30638 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:40.005389   30638 main.go:141] libmachine: Using API Version  1
	I0708 20:01:40.005413   30638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:40.005835   30638 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:40.006016   30638 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 20:01:40.008907   30638 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:40.009350   30638 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:01:40.009376   30638 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:40.009475   30638 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:01:40.010053   30638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:40.010099   30638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:40.025876   30638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43583
	I0708 20:01:40.026286   30638 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:40.026705   30638 main.go:141] libmachine: Using API Version  1
	I0708 20:01:40.026728   30638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:40.027031   30638 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:40.027247   30638 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 20:01:40.027431   30638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:40.027471   30638 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 20:01:40.029856   30638 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:40.030318   30638 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:01:40.030342   30638 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:40.030467   30638 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 20:01:40.030614   30638 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 20:01:40.030754   30638 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 20:01:40.030935   30638 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	W0708 20:01:40.483704   30638 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:40.483745   30638 retry.go:31] will retry after 178.849514ms: dial tcp 192.168.39.216:22: connect: no route to host
	W0708 20:01:43.719699   30638 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.216:22: connect: no route to host
	W0708 20:01:43.719780   30638 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	E0708 20:01:43.719798   30638 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:43.719808   30638 status.go:257] ha-511021-m02 status: &{Name:ha-511021-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0708 20:01:43.719835   30638 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:43.719846   30638 status.go:255] checking status of ha-511021-m03 ...
	I0708 20:01:43.720240   30638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:43.720302   30638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:43.734967   30638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0708 20:01:43.735389   30638 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:43.735903   30638 main.go:141] libmachine: Using API Version  1
	I0708 20:01:43.735934   30638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:43.736244   30638 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:43.736433   30638 main.go:141] libmachine: (ha-511021-m03) Calling .GetState
	I0708 20:01:43.737992   30638 status.go:330] ha-511021-m03 host status = "Running" (err=<nil>)
	I0708 20:01:43.738009   30638 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:01:43.738298   30638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:43.738338   30638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:43.752683   30638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0708 20:01:43.753057   30638 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:43.753493   30638 main.go:141] libmachine: Using API Version  1
	I0708 20:01:43.753519   30638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:43.753808   30638 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:43.753988   30638 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 20:01:43.756829   30638 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:43.757226   30638 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:01:43.757257   30638 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:43.757388   30638 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:01:43.757780   30638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:43.757812   30638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:43.772234   30638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0708 20:01:43.772586   30638 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:43.773039   30638 main.go:141] libmachine: Using API Version  1
	I0708 20:01:43.773060   30638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:43.773366   30638 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:43.773534   30638 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 20:01:43.773682   30638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:43.773699   30638 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 20:01:43.776130   30638 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:43.776497   30638 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:01:43.776520   30638 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:43.776687   30638 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 20:01:43.776824   30638 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 20:01:43.776960   30638 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 20:01:43.777115   30638 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 20:01:43.864218   30638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:43.880723   30638 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:01:43.880749   30638 api_server.go:166] Checking apiserver status ...
	I0708 20:01:43.880779   30638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:01:43.898992   30638 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0708 20:01:43.909250   30638 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:01:43.909325   30638 ssh_runner.go:195] Run: ls
	I0708 20:01:43.914611   30638 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:01:43.921530   30638 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:01:43.921557   30638 status.go:422] ha-511021-m03 apiserver status = Running (err=<nil>)
	I0708 20:01:43.921566   30638 status.go:257] ha-511021-m03 status: &{Name:ha-511021-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:01:43.921584   30638 status.go:255] checking status of ha-511021-m04 ...
	I0708 20:01:43.921892   30638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:43.921933   30638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:43.937058   30638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45889
	I0708 20:01:43.937510   30638 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:43.938002   30638 main.go:141] libmachine: Using API Version  1
	I0708 20:01:43.938029   30638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:43.938373   30638 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:43.938651   30638 main.go:141] libmachine: (ha-511021-m04) Calling .GetState
	I0708 20:01:43.940299   30638 status.go:330] ha-511021-m04 host status = "Running" (err=<nil>)
	I0708 20:01:43.940323   30638 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:01:43.940666   30638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:43.940703   30638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:43.956175   30638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33551
	I0708 20:01:43.956656   30638 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:43.957178   30638 main.go:141] libmachine: Using API Version  1
	I0708 20:01:43.957200   30638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:43.957548   30638 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:43.957744   30638 main.go:141] libmachine: (ha-511021-m04) Calling .GetIP
	I0708 20:01:43.960385   30638 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:43.960787   30638 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:01:43.960825   30638 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:43.960974   30638 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:01:43.961329   30638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:43.961368   30638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:43.977053   30638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I0708 20:01:43.977463   30638 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:43.977915   30638 main.go:141] libmachine: Using API Version  1
	I0708 20:01:43.977933   30638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:43.978256   30638 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:43.978472   30638 main.go:141] libmachine: (ha-511021-m04) Calling .DriverName
	I0708 20:01:43.978645   30638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:43.978666   30638 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHHostname
	I0708 20:01:43.981659   30638 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:43.982062   30638 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:01:43.982088   30638 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:43.982295   30638 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHPort
	I0708 20:01:43.982505   30638 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHKeyPath
	I0708 20:01:43.982663   30638 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHUsername
	I0708 20:01:43.982791   30638 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m04/id_rsa Username:docker}
	I0708 20:01:44.062827   30638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:44.077262   30638 status.go:257] ha-511021-m04 status: &{Name:ha-511021-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr: exit status 3 (4.288499914s)

-- stdout --
	ha-511021
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-511021-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0708 20:01:46.107340   30739 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:01:46.107487   30739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:01:46.107769   30739 out.go:304] Setting ErrFile to fd 2...
	I0708 20:01:46.107789   30739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:01:46.108390   30739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:01:46.108660   30739 out.go:298] Setting JSON to false
	I0708 20:01:46.108694   30739 mustload.go:65] Loading cluster: ha-511021
	I0708 20:01:46.108793   30739 notify.go:220] Checking for updates...
	I0708 20:01:46.109237   30739 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:01:46.109259   30739 status.go:255] checking status of ha-511021 ...
	I0708 20:01:46.109885   30739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:46.109953   30739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:46.124723   30739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43651
	I0708 20:01:46.125232   30739 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:46.125818   30739 main.go:141] libmachine: Using API Version  1
	I0708 20:01:46.125845   30739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:46.126217   30739 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:46.126391   30739 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 20:01:46.127918   30739 status.go:330] ha-511021 host status = "Running" (err=<nil>)
	I0708 20:01:46.127936   30739 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:01:46.128332   30739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:46.128379   30739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:46.142831   30739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
	I0708 20:01:46.143190   30739 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:46.143644   30739 main.go:141] libmachine: Using API Version  1
	I0708 20:01:46.143664   30739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:46.143970   30739 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:46.144186   30739 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:01:46.147011   30739 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:46.147444   30739 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:01:46.147495   30739 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:46.147678   30739 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:01:46.147972   30739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:46.148004   30739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:46.163907   30739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46199
	I0708 20:01:46.164323   30739 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:46.164773   30739 main.go:141] libmachine: Using API Version  1
	I0708 20:01:46.164792   30739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:46.165137   30739 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:46.165326   30739 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:01:46.165515   30739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:46.165539   30739 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:01:46.168032   30739 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:46.168433   30739 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:01:46.168470   30739 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:46.168604   30739 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:01:46.168914   30739 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:01:46.169096   30739 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:01:46.169250   30739 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:01:46.255366   30739 ssh_runner.go:195] Run: systemctl --version
	I0708 20:01:46.267851   30739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:46.283345   30739 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:01:46.283378   30739 api_server.go:166] Checking apiserver status ...
	I0708 20:01:46.283410   30739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:01:46.299537   30739 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0708 20:01:46.310705   30739 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:01:46.310780   30739 ssh_runner.go:195] Run: ls
	I0708 20:01:46.315537   30739 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:01:46.319298   30739 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:01:46.319317   30739 status.go:422] ha-511021 apiserver status = Running (err=<nil>)
	I0708 20:01:46.319329   30739 status.go:257] ha-511021 status: &{Name:ha-511021 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:01:46.319347   30739 status.go:255] checking status of ha-511021-m02 ...
	I0708 20:01:46.319740   30739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:46.319786   30739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:46.334491   30739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
	I0708 20:01:46.334875   30739 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:46.335438   30739 main.go:141] libmachine: Using API Version  1
	I0708 20:01:46.335490   30739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:46.335826   30739 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:46.336017   30739 main.go:141] libmachine: (ha-511021-m02) Calling .GetState
	I0708 20:01:46.337609   30739 status.go:330] ha-511021-m02 host status = "Running" (err=<nil>)
	I0708 20:01:46.337625   30739 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:01:46.338062   30739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:46.338105   30739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:46.352833   30739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36163
	I0708 20:01:46.353265   30739 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:46.353688   30739 main.go:141] libmachine: Using API Version  1
	I0708 20:01:46.353710   30739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:46.354015   30739 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:46.354248   30739 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 20:01:46.356923   30739 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:46.357386   30739 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:01:46.357438   30739 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:46.357526   30739 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:01:46.357907   30739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:46.357948   30739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:46.373361   30739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
	I0708 20:01:46.373750   30739 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:46.374201   30739 main.go:141] libmachine: Using API Version  1
	I0708 20:01:46.374221   30739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:46.374520   30739 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:46.374790   30739 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 20:01:46.374961   30739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:46.374982   30739 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 20:01:46.377576   30739 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:46.377916   30739 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:01:46.377938   30739 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:46.378096   30739 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 20:01:46.378282   30739 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 20:01:46.378474   30739 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 20:01:46.378639   30739 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	W0708 20:01:46.791639   30739 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:46.791682   30739 retry.go:31] will retry after 143.298492ms: dial tcp 192.168.39.216:22: connect: no route to host
	W0708 20:01:49.991688   30739 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.216:22: connect: no route to host
	W0708 20:01:49.991794   30739 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	E0708 20:01:49.991813   30739 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:49.991822   30739 status.go:257] ha-511021-m02 status: &{Name:ha-511021-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0708 20:01:49.991842   30739 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:49.991854   30739 status.go:255] checking status of ha-511021-m03 ...
	I0708 20:01:49.992217   30739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:49.992267   30739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:50.007569   30739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34757
	I0708 20:01:50.008065   30739 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:50.008505   30739 main.go:141] libmachine: Using API Version  1
	I0708 20:01:50.008531   30739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:50.008897   30739 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:50.009128   30739 main.go:141] libmachine: (ha-511021-m03) Calling .GetState
	I0708 20:01:50.010893   30739 status.go:330] ha-511021-m03 host status = "Running" (err=<nil>)
	I0708 20:01:50.010906   30739 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:01:50.011182   30739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:50.011222   30739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:50.025973   30739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45131
	I0708 20:01:50.026379   30739 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:50.026855   30739 main.go:141] libmachine: Using API Version  1
	I0708 20:01:50.026881   30739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:50.027195   30739 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:50.027402   30739 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 20:01:50.030253   30739 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:50.030685   30739 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:01:50.030707   30739 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:50.030858   30739 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:01:50.031154   30739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:50.031194   30739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:50.045873   30739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45595
	I0708 20:01:50.046275   30739 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:50.046687   30739 main.go:141] libmachine: Using API Version  1
	I0708 20:01:50.046707   30739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:50.047021   30739 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:50.047290   30739 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 20:01:50.047676   30739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:50.047703   30739 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 20:01:50.050958   30739 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:50.051481   30739 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:01:50.051519   30739 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:50.051692   30739 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 20:01:50.051876   30739 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 20:01:50.052051   30739 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 20:01:50.052168   30739 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 20:01:50.139576   30739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:50.154487   30739 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:01:50.154512   30739 api_server.go:166] Checking apiserver status ...
	I0708 20:01:50.154543   30739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:01:50.171561   30739 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0708 20:01:50.181660   30739 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:01:50.181722   30739 ssh_runner.go:195] Run: ls
	I0708 20:01:50.186940   30739 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:01:50.193183   30739 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:01:50.193205   30739 status.go:422] ha-511021-m03 apiserver status = Running (err=<nil>)
	I0708 20:01:50.193213   30739 status.go:257] ha-511021-m03 status: &{Name:ha-511021-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:01:50.193244   30739 status.go:255] checking status of ha-511021-m04 ...
	I0708 20:01:50.193520   30739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:50.193556   30739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:50.209272   30739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0708 20:01:50.209656   30739 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:50.210155   30739 main.go:141] libmachine: Using API Version  1
	I0708 20:01:50.210203   30739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:50.210545   30739 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:50.210781   30739 main.go:141] libmachine: (ha-511021-m04) Calling .GetState
	I0708 20:01:50.212154   30739 status.go:330] ha-511021-m04 host status = "Running" (err=<nil>)
	I0708 20:01:50.212168   30739 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:01:50.212478   30739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:50.212512   30739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:50.227242   30739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I0708 20:01:50.227680   30739 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:50.228205   30739 main.go:141] libmachine: Using API Version  1
	I0708 20:01:50.228229   30739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:50.228529   30739 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:50.228696   30739 main.go:141] libmachine: (ha-511021-m04) Calling .GetIP
	I0708 20:01:50.231607   30739 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:50.232072   30739 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:01:50.232111   30739 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:50.232204   30739 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:01:50.232516   30739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:50.232546   30739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:50.247267   30739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39419
	I0708 20:01:50.247775   30739 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:50.248226   30739 main.go:141] libmachine: Using API Version  1
	I0708 20:01:50.248245   30739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:50.248532   30739 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:50.248722   30739 main.go:141] libmachine: (ha-511021-m04) Calling .DriverName
	I0708 20:01:50.248952   30739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:50.248972   30739 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHHostname
	I0708 20:01:50.251972   30739 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:50.252292   30739 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:01:50.252320   30739 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:50.252438   30739 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHPort
	I0708 20:01:50.252629   30739 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHKeyPath
	I0708 20:01:50.252797   30739 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHUsername
	I0708 20:01:50.252954   30739 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m04/id_rsa Username:docker}
	I0708 20:01:50.338856   30739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:50.353385   30739 status.go:257] ha-511021-m04 status: &{Name:ha-511021-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr: exit status 3 (3.731817998s)

                                                
                                                
-- stdout --
	ha-511021
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-511021-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:01:54.927043   30855 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:01:54.927189   30855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:01:54.927199   30855 out.go:304] Setting ErrFile to fd 2...
	I0708 20:01:54.927204   30855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:01:54.927391   30855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:01:54.927572   30855 out.go:298] Setting JSON to false
	I0708 20:01:54.927598   30855 mustload.go:65] Loading cluster: ha-511021
	I0708 20:01:54.927638   30855 notify.go:220] Checking for updates...
	I0708 20:01:54.927968   30855 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:01:54.927982   30855 status.go:255] checking status of ha-511021 ...
	I0708 20:01:54.928471   30855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:54.928543   30855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:54.947470   30855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0708 20:01:54.948058   30855 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:54.948763   30855 main.go:141] libmachine: Using API Version  1
	I0708 20:01:54.948873   30855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:54.949177   30855 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:54.949331   30855 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 20:01:54.950815   30855 status.go:330] ha-511021 host status = "Running" (err=<nil>)
	I0708 20:01:54.950832   30855 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:01:54.951117   30855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:54.951173   30855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:54.966014   30855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0708 20:01:54.966418   30855 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:54.966906   30855 main.go:141] libmachine: Using API Version  1
	I0708 20:01:54.966929   30855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:54.967268   30855 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:54.967480   30855 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:01:54.970070   30855 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:54.970696   30855 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:01:54.970725   30855 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:54.970771   30855 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:01:54.971056   30855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:54.971107   30855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:54.986016   30855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0708 20:01:54.986412   30855 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:54.986837   30855 main.go:141] libmachine: Using API Version  1
	I0708 20:01:54.986857   30855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:54.987177   30855 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:54.987396   30855 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:01:54.987627   30855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:54.987655   30855 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:01:54.990460   30855 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:54.990819   30855 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:01:54.990845   30855 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:01:54.990987   30855 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:01:54.991162   30855 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:01:54.991323   30855 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:01:54.991484   30855 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:01:55.072175   30855 ssh_runner.go:195] Run: systemctl --version
	I0708 20:01:55.078888   30855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:55.094311   30855 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:01:55.094340   30855 api_server.go:166] Checking apiserver status ...
	I0708 20:01:55.094374   30855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:01:55.109044   30855 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0708 20:01:55.119219   30855 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:01:55.119279   30855 ssh_runner.go:195] Run: ls
	I0708 20:01:55.124343   30855 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:01:55.128489   30855 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:01:55.128511   30855 status.go:422] ha-511021 apiserver status = Running (err=<nil>)
	I0708 20:01:55.128521   30855 status.go:257] ha-511021 status: &{Name:ha-511021 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:01:55.128536   30855 status.go:255] checking status of ha-511021-m02 ...
	I0708 20:01:55.128831   30855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:55.128877   30855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:55.145375   30855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46155
	I0708 20:01:55.145821   30855 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:55.146267   30855 main.go:141] libmachine: Using API Version  1
	I0708 20:01:55.146292   30855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:55.146620   30855 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:55.146791   30855 main.go:141] libmachine: (ha-511021-m02) Calling .GetState
	I0708 20:01:55.148241   30855 status.go:330] ha-511021-m02 host status = "Running" (err=<nil>)
	I0708 20:01:55.148258   30855 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:01:55.148555   30855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:55.148598   30855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:55.163929   30855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33183
	I0708 20:01:55.164362   30855 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:55.164864   30855 main.go:141] libmachine: Using API Version  1
	I0708 20:01:55.164893   30855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:55.165272   30855 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:55.165428   30855 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 20:01:55.168261   30855 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:55.168729   30855 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:01:55.168756   30855 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:55.168878   30855 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:01:55.169175   30855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:55.169214   30855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:55.184047   30855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
	I0708 20:01:55.184418   30855 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:55.184942   30855 main.go:141] libmachine: Using API Version  1
	I0708 20:01:55.184966   30855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:55.185260   30855 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:55.185445   30855 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 20:01:55.185659   30855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:55.185683   30855 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 20:01:55.188499   30855 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:55.188892   30855 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:01:55.188925   30855 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:01:55.189015   30855 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 20:01:55.189163   30855 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 20:01:55.189324   30855 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 20:01:55.189463   30855 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	W0708 20:01:58.243745   30855 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.216:22: connect: no route to host
	W0708 20:01:58.243864   30855 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	E0708 20:01:58.243900   30855 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:58.243910   30855 status.go:257] ha-511021-m02 status: &{Name:ha-511021-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0708 20:01:58.243927   30855 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:01:58.243935   30855 status.go:255] checking status of ha-511021-m03 ...
	I0708 20:01:58.244281   30855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:58.244323   30855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:58.259833   30855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
	I0708 20:01:58.260262   30855 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:58.260755   30855 main.go:141] libmachine: Using API Version  1
	I0708 20:01:58.260775   30855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:58.261057   30855 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:58.261217   30855 main.go:141] libmachine: (ha-511021-m03) Calling .GetState
	I0708 20:01:58.262599   30855 status.go:330] ha-511021-m03 host status = "Running" (err=<nil>)
	I0708 20:01:58.262616   30855 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:01:58.262934   30855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:58.262978   30855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:58.277933   30855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46327
	I0708 20:01:58.278382   30855 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:58.278808   30855 main.go:141] libmachine: Using API Version  1
	I0708 20:01:58.278830   30855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:58.279138   30855 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:58.279344   30855 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 20:01:58.282095   30855 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:58.282493   30855 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:01:58.282519   30855 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:58.282676   30855 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:01:58.282979   30855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:58.283022   30855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:58.298717   30855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0708 20:01:58.299127   30855 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:58.299561   30855 main.go:141] libmachine: Using API Version  1
	I0708 20:01:58.299580   30855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:58.299906   30855 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:58.300094   30855 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 20:01:58.300261   30855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:58.300282   30855 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 20:01:58.302891   30855 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:58.303279   30855 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:01:58.303310   30855 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:01:58.303471   30855 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 20:01:58.303632   30855 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 20:01:58.303774   30855 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 20:01:58.303892   30855 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 20:01:58.391670   30855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:58.407376   30855 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:01:58.407404   30855 api_server.go:166] Checking apiserver status ...
	I0708 20:01:58.407440   30855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:01:58.426212   30855 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0708 20:01:58.436727   30855 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:01:58.436781   30855 ssh_runner.go:195] Run: ls
	I0708 20:01:58.441357   30855 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:01:58.446361   30855 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:01:58.446390   30855 status.go:422] ha-511021-m03 apiserver status = Running (err=<nil>)
	I0708 20:01:58.446401   30855 status.go:257] ha-511021-m03 status: &{Name:ha-511021-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:01:58.446420   30855 status.go:255] checking status of ha-511021-m04 ...
	I0708 20:01:58.446808   30855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:58.446849   30855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:58.461914   30855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41963
	I0708 20:01:58.462305   30855 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:58.462745   30855 main.go:141] libmachine: Using API Version  1
	I0708 20:01:58.462767   30855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:58.463128   30855 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:58.463318   30855 main.go:141] libmachine: (ha-511021-m04) Calling .GetState
	I0708 20:01:58.464918   30855 status.go:330] ha-511021-m04 host status = "Running" (err=<nil>)
	I0708 20:01:58.464932   30855 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:01:58.465203   30855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:58.465234   30855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:58.480327   30855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0708 20:01:58.480730   30855 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:58.481310   30855 main.go:141] libmachine: Using API Version  1
	I0708 20:01:58.481338   30855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:58.481635   30855 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:58.481823   30855 main.go:141] libmachine: (ha-511021-m04) Calling .GetIP
	I0708 20:01:58.484740   30855 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:58.485140   30855 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:01:58.485171   30855 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:58.485290   30855 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:01:58.485677   30855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:01:58.485724   30855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:01:58.500929   30855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34031
	I0708 20:01:58.501350   30855 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:01:58.501794   30855 main.go:141] libmachine: Using API Version  1
	I0708 20:01:58.501813   30855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:01:58.502087   30855 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:01:58.502261   30855 main.go:141] libmachine: (ha-511021-m04) Calling .DriverName
	I0708 20:01:58.502432   30855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:01:58.502457   30855 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHHostname
	I0708 20:01:58.505083   30855 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:58.505496   30855 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:01:58.505518   30855 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:01:58.505675   30855 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHPort
	I0708 20:01:58.505833   30855 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHKeyPath
	I0708 20:01:58.505990   30855 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHUsername
	I0708 20:01:58.506095   30855 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m04/id_rsa Username:docker}
	I0708 20:01:58.592204   30855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:01:58.607589   30855 status.go:257] ha-511021-m04 status: &{Name:ha-511021-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr: exit status 3 (3.728924408s)

                                                
                                                
-- stdout --
	ha-511021
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-511021-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:02:03.397307   30971 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:02:03.397426   30971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:02:03.397434   30971 out.go:304] Setting ErrFile to fd 2...
	I0708 20:02:03.397438   30971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:02:03.397612   30971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:02:03.397833   30971 out.go:298] Setting JSON to false
	I0708 20:02:03.397864   30971 mustload.go:65] Loading cluster: ha-511021
	I0708 20:02:03.397905   30971 notify.go:220] Checking for updates...
	I0708 20:02:03.398213   30971 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:02:03.398227   30971 status.go:255] checking status of ha-511021 ...
	I0708 20:02:03.398588   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:03.398652   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:03.413762   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0708 20:02:03.414207   30971 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:03.414783   30971 main.go:141] libmachine: Using API Version  1
	I0708 20:02:03.414807   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:03.415180   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:03.415404   30971 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 20:02:03.417007   30971 status.go:330] ha-511021 host status = "Running" (err=<nil>)
	I0708 20:02:03.417024   30971 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:02:03.417298   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:03.417340   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:03.432097   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0708 20:02:03.432520   30971 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:03.432959   30971 main.go:141] libmachine: Using API Version  1
	I0708 20:02:03.433007   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:03.433364   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:03.433565   30971 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:02:03.436589   30971 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:02:03.436969   30971 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:02:03.436993   30971 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:02:03.437167   30971 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:02:03.437476   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:03.437517   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:03.452397   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38329
	I0708 20:02:03.452821   30971 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:03.453248   30971 main.go:141] libmachine: Using API Version  1
	I0708 20:02:03.453269   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:03.453547   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:03.453714   30971 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:02:03.453930   30971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:02:03.453957   30971 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:02:03.456608   30971 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:02:03.457055   30971 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:02:03.457081   30971 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:02:03.457222   30971 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:02:03.457401   30971 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:02:03.457564   30971 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:02:03.457718   30971 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:02:03.544591   30971 ssh_runner.go:195] Run: systemctl --version
	I0708 20:02:03.551639   30971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:02:03.567490   30971 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:02:03.567522   30971 api_server.go:166] Checking apiserver status ...
	I0708 20:02:03.567560   30971 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:02:03.583509   30971 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0708 20:02:03.595350   30971 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:02:03.595403   30971 ssh_runner.go:195] Run: ls
	I0708 20:02:03.600749   30971 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:02:03.606831   30971 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:02:03.606857   30971 status.go:422] ha-511021 apiserver status = Running (err=<nil>)
	I0708 20:02:03.606880   30971 status.go:257] ha-511021 status: &{Name:ha-511021 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:02:03.606905   30971 status.go:255] checking status of ha-511021-m02 ...
	I0708 20:02:03.607259   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:03.607305   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:03.622934   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I0708 20:02:03.623345   30971 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:03.623869   30971 main.go:141] libmachine: Using API Version  1
	I0708 20:02:03.623893   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:03.624294   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:03.624482   30971 main.go:141] libmachine: (ha-511021-m02) Calling .GetState
	I0708 20:02:03.626344   30971 status.go:330] ha-511021-m02 host status = "Running" (err=<nil>)
	I0708 20:02:03.626358   30971 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:02:03.626732   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:03.626781   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:03.643176   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33397
	I0708 20:02:03.643574   30971 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:03.644077   30971 main.go:141] libmachine: Using API Version  1
	I0708 20:02:03.644103   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:03.644402   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:03.644603   30971 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 20:02:03.647339   30971 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:02:03.647752   30971 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:02:03.647773   30971 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:02:03.647952   30971 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:02:03.648246   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:03.648286   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:03.663084   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33335
	I0708 20:02:03.663485   30971 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:03.663954   30971 main.go:141] libmachine: Using API Version  1
	I0708 20:02:03.663978   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:03.664304   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:03.664486   30971 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 20:02:03.664669   30971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:02:03.664689   30971 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 20:02:03.667301   30971 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:02:03.667784   30971 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:02:03.667833   30971 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:02:03.668013   30971 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 20:02:03.668258   30971 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 20:02:03.668428   30971 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 20:02:03.668551   30971 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	W0708 20:02:06.723719   30971 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.216:22: connect: no route to host
	W0708 20:02:06.723795   30971 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	E0708 20:02:06.723809   30971 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:02:06.723819   30971 status.go:257] ha-511021-m02 status: &{Name:ha-511021-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0708 20:02:06.723836   30971 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	I0708 20:02:06.723844   30971 status.go:255] checking status of ha-511021-m03 ...
	I0708 20:02:06.724158   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:06.724196   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:06.738928   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37445
	I0708 20:02:06.739373   30971 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:06.739862   30971 main.go:141] libmachine: Using API Version  1
	I0708 20:02:06.739887   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:06.740195   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:06.740373   30971 main.go:141] libmachine: (ha-511021-m03) Calling .GetState
	I0708 20:02:06.742196   30971 status.go:330] ha-511021-m03 host status = "Running" (err=<nil>)
	I0708 20:02:06.742223   30971 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:02:06.742522   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:06.742562   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:06.758475   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0708 20:02:06.758835   30971 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:06.759296   30971 main.go:141] libmachine: Using API Version  1
	I0708 20:02:06.759315   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:06.759700   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:06.759920   30971 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 20:02:06.762429   30971 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:02:06.762902   30971 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:02:06.762929   30971 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:02:06.763080   30971 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:02:06.763416   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:06.763465   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:06.779204   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36397
	I0708 20:02:06.779647   30971 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:06.780145   30971 main.go:141] libmachine: Using API Version  1
	I0708 20:02:06.780170   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:06.780479   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:06.780710   30971 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 20:02:06.780896   30971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:02:06.780924   30971 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 20:02:06.784086   30971 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:02:06.784593   30971 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:02:06.784611   30971 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:02:06.784792   30971 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 20:02:06.785040   30971 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 20:02:06.785204   30971 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 20:02:06.785402   30971 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 20:02:06.875775   30971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:02:06.891572   30971 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:02:06.891596   30971 api_server.go:166] Checking apiserver status ...
	I0708 20:02:06.891623   30971 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:02:06.907545   30971 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0708 20:02:06.917962   30971 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:02:06.918013   30971 ssh_runner.go:195] Run: ls
	I0708 20:02:06.922577   30971 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:02:06.926574   30971 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:02:06.926594   30971 status.go:422] ha-511021-m03 apiserver status = Running (err=<nil>)
	I0708 20:02:06.926602   30971 status.go:257] ha-511021-m03 status: &{Name:ha-511021-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:02:06.926615   30971 status.go:255] checking status of ha-511021-m04 ...
	I0708 20:02:06.926898   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:06.926935   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:06.941660   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45545
	I0708 20:02:06.942119   30971 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:06.942628   30971 main.go:141] libmachine: Using API Version  1
	I0708 20:02:06.942648   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:06.942972   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:06.943138   30971 main.go:141] libmachine: (ha-511021-m04) Calling .GetState
	I0708 20:02:06.944555   30971 status.go:330] ha-511021-m04 host status = "Running" (err=<nil>)
	I0708 20:02:06.944572   30971 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:02:06.944866   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:06.944898   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:06.960854   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40003
	I0708 20:02:06.961279   30971 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:06.961747   30971 main.go:141] libmachine: Using API Version  1
	I0708 20:02:06.961773   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:06.962143   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:06.962314   30971 main.go:141] libmachine: (ha-511021-m04) Calling .GetIP
	I0708 20:02:06.965114   30971 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:02:06.965583   30971 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:02:06.965614   30971 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:02:06.965763   30971 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:02:06.966114   30971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:06.966151   30971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:06.981563   30971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0708 20:02:06.981969   30971 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:06.982412   30971 main.go:141] libmachine: Using API Version  1
	I0708 20:02:06.982433   30971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:06.982811   30971 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:06.983014   30971 main.go:141] libmachine: (ha-511021-m04) Calling .DriverName
	I0708 20:02:06.983212   30971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:02:06.983233   30971 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHHostname
	I0708 20:02:06.986079   30971 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:02:06.986509   30971 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:02:06.986529   30971 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:02:06.986664   30971 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHPort
	I0708 20:02:06.986840   30971 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHKeyPath
	I0708 20:02:06.986972   30971 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHUsername
	I0708 20:02:06.987095   30971 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m04/id_rsa Username:docker}
	I0708 20:02:07.067389   30971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:02:07.082330   30971 status.go:257] ha-511021-m04 status: &{Name:ha-511021-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0708 20:02:07.689288   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
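The stderr above traces how the status check probes each node: the kvm2 plugin resolves the VM's address from its libvirt DHCP lease, an SSH session runs `df -h /var` and checks the kubelet unit, and control-plane nodes are additionally verified against the HA virtual IP's /healthz endpoint; ha-511021-m02 fails at the SSH step with "no route to host". The following standalone Go sketch is illustrative only (it is not minikube's own code) and reproduces just the reachability and healthz portions of that sequence, hard-coding the addresses observed in this run:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"net/http"
		"time"
	)

	func main() {
		// 192.168.39.216 is ha-511021-m02's DHCP lease in the log above.
		conn, err := net.DialTimeout("tcp", "192.168.39.216:22", 3*time.Second)
		if err != nil {
			fmt.Println("ssh port unreachable:", err) // the log above hit "no route to host" here
		} else {
			conn.Close()
			fmt.Println("ssh port reachable")
		}

		// 192.168.39.254:8443 is the HA virtual IP; the apiserver cert is not
		// signed by a system CA, so this sketch skips verification.
		client := &http.Client{
			Timeout:   3 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status) // the run above received 200 "ok"
	}

Run against this cluster, the dial would fail the same way the log does while the /healthz probe still returns 200, which matches the split status (m02 Host:Error, m03 apiserver Running) reported above.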
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr: exit status 7 (630.991767ms)

                                                
                                                
-- stdout --
	ha-511021
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-511021-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:02:14.224940   31109 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:02:14.225462   31109 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:02:14.225474   31109 out.go:304] Setting ErrFile to fd 2...
	I0708 20:02:14.225480   31109 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:02:14.225706   31109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:02:14.225914   31109 out.go:298] Setting JSON to false
	I0708 20:02:14.225944   31109 mustload.go:65] Loading cluster: ha-511021
	I0708 20:02:14.226033   31109 notify.go:220] Checking for updates...
	I0708 20:02:14.226346   31109 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:02:14.226363   31109 status.go:255] checking status of ha-511021 ...
	I0708 20:02:14.226727   31109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:14.226824   31109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:14.244389   31109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41823
	I0708 20:02:14.244847   31109 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:14.245419   31109 main.go:141] libmachine: Using API Version  1
	I0708 20:02:14.245445   31109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:14.245845   31109 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:14.246047   31109 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 20:02:14.247745   31109 status.go:330] ha-511021 host status = "Running" (err=<nil>)
	I0708 20:02:14.247763   31109 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:02:14.248141   31109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:14.248180   31109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:14.262271   31109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40539
	I0708 20:02:14.262631   31109 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:14.263090   31109 main.go:141] libmachine: Using API Version  1
	I0708 20:02:14.263107   31109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:14.263396   31109 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:14.263655   31109 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:02:14.266471   31109 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:02:14.266896   31109 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:02:14.266927   31109 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:02:14.267035   31109 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:02:14.267334   31109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:14.267401   31109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:14.281649   31109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0708 20:02:14.282049   31109 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:14.282524   31109 main.go:141] libmachine: Using API Version  1
	I0708 20:02:14.282546   31109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:14.282881   31109 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:14.283084   31109 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:02:14.283274   31109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:02:14.283294   31109 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:02:14.285916   31109 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:02:14.286351   31109 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:02:14.286385   31109 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:02:14.286447   31109 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:02:14.286619   31109 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:02:14.286762   31109 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:02:14.286901   31109 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:02:14.372534   31109 ssh_runner.go:195] Run: systemctl --version
	I0708 20:02:14.378638   31109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:02:14.393585   31109 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:02:14.393615   31109 api_server.go:166] Checking apiserver status ...
	I0708 20:02:14.393654   31109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:02:14.412060   31109 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0708 20:02:14.425093   31109 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:02:14.425174   31109 ssh_runner.go:195] Run: ls
	I0708 20:02:14.430205   31109 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:02:14.437931   31109 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:02:14.437960   31109 status.go:422] ha-511021 apiserver status = Running (err=<nil>)
	I0708 20:02:14.437969   31109 status.go:257] ha-511021 status: &{Name:ha-511021 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:02:14.437984   31109 status.go:255] checking status of ha-511021-m02 ...
	I0708 20:02:14.438267   31109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:14.438310   31109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:14.452896   31109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0708 20:02:14.453297   31109 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:14.453760   31109 main.go:141] libmachine: Using API Version  1
	I0708 20:02:14.453782   31109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:14.454079   31109 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:14.454264   31109 main.go:141] libmachine: (ha-511021-m02) Calling .GetState
	I0708 20:02:14.456091   31109 status.go:330] ha-511021-m02 host status = "Stopped" (err=<nil>)
	I0708 20:02:14.456107   31109 status.go:343] host is not running, skipping remaining checks
	I0708 20:02:14.456113   31109 status.go:257] ha-511021-m02 status: &{Name:ha-511021-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:02:14.456127   31109 status.go:255] checking status of ha-511021-m03 ...
	I0708 20:02:14.456418   31109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:14.456460   31109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:14.473050   31109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33921
	I0708 20:02:14.473457   31109 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:14.473904   31109 main.go:141] libmachine: Using API Version  1
	I0708 20:02:14.473924   31109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:14.474211   31109 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:14.474389   31109 main.go:141] libmachine: (ha-511021-m03) Calling .GetState
	I0708 20:02:14.475691   31109 status.go:330] ha-511021-m03 host status = "Running" (err=<nil>)
	I0708 20:02:14.475705   31109 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:02:14.475994   31109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:14.476034   31109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:14.491476   31109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41489
	I0708 20:02:14.491833   31109 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:14.492364   31109 main.go:141] libmachine: Using API Version  1
	I0708 20:02:14.492378   31109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:14.492675   31109 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:14.492863   31109 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 20:02:14.495394   31109 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:02:14.495760   31109 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:02:14.495783   31109 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:02:14.495992   31109 host.go:66] Checking if "ha-511021-m03" exists ...
	I0708 20:02:14.496377   31109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:14.496421   31109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:14.510456   31109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0708 20:02:14.510838   31109 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:14.511307   31109 main.go:141] libmachine: Using API Version  1
	I0708 20:02:14.511323   31109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:14.511718   31109 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:14.511934   31109 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 20:02:14.512133   31109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:02:14.512157   31109 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 20:02:14.514669   31109 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:02:14.515075   31109 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:02:14.515094   31109 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:02:14.515270   31109 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 20:02:14.515464   31109 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 20:02:14.515608   31109 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 20:02:14.515804   31109 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 20:02:14.602959   31109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:02:14.620050   31109 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:02:14.620075   31109 api_server.go:166] Checking apiserver status ...
	I0708 20:02:14.620112   31109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:02:14.634775   31109 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0708 20:02:14.646142   31109 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:02:14.646212   31109 ssh_runner.go:195] Run: ls
	I0708 20:02:14.650943   31109 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:02:14.655181   31109 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:02:14.655208   31109 status.go:422] ha-511021-m03 apiserver status = Running (err=<nil>)
	I0708 20:02:14.655241   31109 status.go:257] ha-511021-m03 status: &{Name:ha-511021-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:02:14.655265   31109 status.go:255] checking status of ha-511021-m04 ...
	I0708 20:02:14.655632   31109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:14.655666   31109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:14.670335   31109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
	I0708 20:02:14.670759   31109 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:14.671269   31109 main.go:141] libmachine: Using API Version  1
	I0708 20:02:14.671288   31109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:14.671628   31109 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:14.671824   31109 main.go:141] libmachine: (ha-511021-m04) Calling .GetState
	I0708 20:02:14.673521   31109 status.go:330] ha-511021-m04 host status = "Running" (err=<nil>)
	I0708 20:02:14.673536   31109 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:02:14.673840   31109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:14.673896   31109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:14.689299   31109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I0708 20:02:14.689736   31109 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:14.690172   31109 main.go:141] libmachine: Using API Version  1
	I0708 20:02:14.690193   31109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:14.690567   31109 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:14.690754   31109 main.go:141] libmachine: (ha-511021-m04) Calling .GetIP
	I0708 20:02:14.693408   31109 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:02:14.693799   31109 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:02:14.693818   31109 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:02:14.693998   31109 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:02:14.694312   31109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:14.694352   31109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:14.709470   31109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35341
	I0708 20:02:14.709904   31109 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:14.710348   31109 main.go:141] libmachine: Using API Version  1
	I0708 20:02:14.710368   31109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:14.710667   31109 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:14.710830   31109 main.go:141] libmachine: (ha-511021-m04) Calling .DriverName
	I0708 20:02:14.711012   31109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:02:14.711031   31109 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHHostname
	I0708 20:02:14.713539   31109 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:02:14.713927   31109 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:02:14.713951   31109 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:02:14.714141   31109 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHPort
	I0708 20:02:14.714433   31109 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHKeyPath
	I0708 20:02:14.714592   31109 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHUsername
	I0708 20:02:14.714697   31109 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m04/id_rsa Username:docker}
	I0708 20:02:14.796539   31109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:02:14.812172   31109 status.go:257] ha-511021-m04 status: &{Name:ha-511021-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr" : exit status 7
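ha_test.go:432 fails with exit status 7 because the status output above still reports the m02 host as Stopped after the `node start m02` attempt. As a hedged illustration only (not part of the test harness, and assuming the `-n`/`--node` selector is available on this build), a small wrapper could poll the same binary with the `--format={{.Host}}` template used in the post-mortem step below until the secondary host reports Running:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(3 * time.Minute)
		for time.Now().Before(deadline) {
			// Same binary and profile as the failing assertion; --node is assumed
			// to scope the status output to ha-511021-m02 only.
			out, _ := exec.Command("out/minikube-linux-amd64",
				"status", "-p", "ha-511021", "-n", "ha-511021-m02",
				"--format={{.Host}}").Output()
			if strings.TrimSpace(string(out)) == "Running" {
				fmt.Println("ha-511021-m02 host is Running")
				return
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Println("ha-511021-m02 still not Running; exit status 7 from status is expected")
	}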
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-511021 -n ha-511021
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-511021 logs -n 25: (1.427407108s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021:/home/docker/cp-test_ha-511021-m03_ha-511021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021 sudo cat                                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | /home/docker/cp-test_ha-511021-m03_ha-511021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m02:/home/docker/cp-test_ha-511021-m03_ha-511021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m02 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | /home/docker/cp-test_ha-511021-m03_ha-511021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m04:/home/docker/cp-test_ha-511021-m03_ha-511021-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m04 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | /home/docker/cp-test_ha-511021-m03_ha-511021-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-511021 cp testdata/cp-test.txt                                                | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3985602198/001/cp-test_ha-511021-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021:/home/docker/cp-test_ha-511021-m04_ha-511021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021 sudo cat                                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /home/docker/cp-test_ha-511021-m04_ha-511021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m02:/home/docker/cp-test_ha-511021-m04_ha-511021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m02 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /home/docker/cp-test_ha-511021-m04_ha-511021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m03:/home/docker/cp-test_ha-511021-m04_ha-511021-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m03 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /home/docker/cp-test_ha-511021-m04_ha-511021-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-511021 node stop m02 -v=7                                                     | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-511021 node start m02 -v=7                                                    | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 19:54:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 19:54:39.652390   25689 out.go:291] Setting OutFile to fd 1 ...
	I0708 19:54:39.652659   25689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:54:39.652671   25689 out.go:304] Setting ErrFile to fd 2...
	I0708 19:54:39.652677   25689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:54:39.652870   25689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 19:54:39.653519   25689 out.go:298] Setting JSON to false
	I0708 19:54:39.654338   25689 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2229,"bootTime":1720466251,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 19:54:39.654396   25689 start.go:139] virtualization: kvm guest
	I0708 19:54:39.656698   25689 out.go:177] * [ha-511021] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 19:54:39.657932   25689 notify.go:220] Checking for updates...
	I0708 19:54:39.657980   25689 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 19:54:39.659140   25689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 19:54:39.660520   25689 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:54:39.661710   25689 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:54:39.662958   25689 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 19:54:39.664711   25689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 19:54:39.666004   25689 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 19:54:39.701610   25689 out.go:177] * Using the kvm2 driver based on user configuration
	I0708 19:54:39.702810   25689 start.go:297] selected driver: kvm2
	I0708 19:54:39.702827   25689 start.go:901] validating driver "kvm2" against <nil>
	I0708 19:54:39.702840   25689 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 19:54:39.703890   25689 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 19:54:39.703985   25689 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 19:54:39.718945   25689 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 19:54:39.718993   25689 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 19:54:39.719197   25689 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 19:54:39.719266   25689 cni.go:84] Creating CNI manager for ""
	I0708 19:54:39.719279   25689 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0708 19:54:39.719291   25689 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0708 19:54:39.719341   25689 start.go:340] cluster config:
	{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0708 19:54:39.719431   25689 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 19:54:39.722110   25689 out.go:177] * Starting "ha-511021" primary control-plane node in "ha-511021" cluster
	I0708 19:54:39.723356   25689 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 19:54:39.723392   25689 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 19:54:39.723400   25689 cache.go:56] Caching tarball of preloaded images
	I0708 19:54:39.723499   25689 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 19:54:39.723511   25689 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 19:54:39.723791   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:54:39.723826   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json: {Name:mk652d8bac760778730093f451bc96812e92f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:54:39.723958   25689 start.go:360] acquireMachinesLock for ha-511021: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 19:54:39.723985   25689 start.go:364] duration metric: took 14.37µs to acquireMachinesLock for "ha-511021"
	I0708 19:54:39.724000   25689 start.go:93] Provisioning new machine with config: &{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:54:39.724058   25689 start.go:125] createHost starting for "" (driver="kvm2")
	I0708 19:54:39.725645   25689 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 19:54:39.725765   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:54:39.725808   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:54:39.740444   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40161
	I0708 19:54:39.740875   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:54:39.741475   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:54:39.741494   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:54:39.741786   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:54:39.741961   25689 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 19:54:39.742100   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:54:39.742233   25689 start.go:159] libmachine.API.Create for "ha-511021" (driver="kvm2")
	I0708 19:54:39.742260   25689 client.go:168] LocalClient.Create starting
	I0708 19:54:39.742291   25689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem
	I0708 19:54:39.742320   25689 main.go:141] libmachine: Decoding PEM data...
	I0708 19:54:39.742333   25689 main.go:141] libmachine: Parsing certificate...
	I0708 19:54:39.742388   25689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem
	I0708 19:54:39.742411   25689 main.go:141] libmachine: Decoding PEM data...
	I0708 19:54:39.742424   25689 main.go:141] libmachine: Parsing certificate...
	I0708 19:54:39.742441   25689 main.go:141] libmachine: Running pre-create checks...
	I0708 19:54:39.742449   25689 main.go:141] libmachine: (ha-511021) Calling .PreCreateCheck
	I0708 19:54:39.742750   25689 main.go:141] libmachine: (ha-511021) Calling .GetConfigRaw
	I0708 19:54:39.743090   25689 main.go:141] libmachine: Creating machine...
	I0708 19:54:39.743102   25689 main.go:141] libmachine: (ha-511021) Calling .Create
	I0708 19:54:39.743227   25689 main.go:141] libmachine: (ha-511021) Creating KVM machine...
	I0708 19:54:39.744373   25689 main.go:141] libmachine: (ha-511021) DBG | found existing default KVM network
	I0708 19:54:39.745003   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:39.744885   25712 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0708 19:54:39.745055   25689 main.go:141] libmachine: (ha-511021) DBG | created network xml: 
	I0708 19:54:39.745063   25689 main.go:141] libmachine: (ha-511021) DBG | <network>
	I0708 19:54:39.745069   25689 main.go:141] libmachine: (ha-511021) DBG |   <name>mk-ha-511021</name>
	I0708 19:54:39.745078   25689 main.go:141] libmachine: (ha-511021) DBG |   <dns enable='no'/>
	I0708 19:54:39.745089   25689 main.go:141] libmachine: (ha-511021) DBG |   
	I0708 19:54:39.745096   25689 main.go:141] libmachine: (ha-511021) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0708 19:54:39.745102   25689 main.go:141] libmachine: (ha-511021) DBG |     <dhcp>
	I0708 19:54:39.745109   25689 main.go:141] libmachine: (ha-511021) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0708 19:54:39.745114   25689 main.go:141] libmachine: (ha-511021) DBG |     </dhcp>
	I0708 19:54:39.745121   25689 main.go:141] libmachine: (ha-511021) DBG |   </ip>
	I0708 19:54:39.745126   25689 main.go:141] libmachine: (ha-511021) DBG |   
	I0708 19:54:39.745130   25689 main.go:141] libmachine: (ha-511021) DBG | </network>
	I0708 19:54:39.745135   25689 main.go:141] libmachine: (ha-511021) DBG | 
	I0708 19:54:39.750050   25689 main.go:141] libmachine: (ha-511021) DBG | trying to create private KVM network mk-ha-511021 192.168.39.0/24...
	I0708 19:54:39.816264   25689 main.go:141] libmachine: (ha-511021) DBG | private KVM network mk-ha-511021 192.168.39.0/24 created
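
Aside (illustration only, not minikube's code): network.go above picks a free private /24 (192.168.39.0/24 here) before generating the libvirt network XML and creating mk-ha-511021. A minimal Go sketch of that "find a free subnet" idea, assuming free simply means not overlapping any address assigned to a local interface; the candidate list and helper name are made up:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet returns the first candidate /24 that does not contain
    // any address currently assigned to a local network interface.
    func firstFreeSubnet(candidates []string) (*net.IPNet, error) {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return nil, err
        }
        for _, c := range candidates {
            _, subnet, err := net.ParseCIDR(c)
            if err != nil {
                return nil, err
            }
            inUse := false
            for _, a := range addrs {
                if ipNet, ok := a.(*net.IPNet); ok && subnet.Contains(ipNet.IP) {
                    inUse = true
                    break
                }
            }
            if !inUse {
                return subnet, nil
            }
        }
        return nil, fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
        // 192.168.39.0/24 matches the log; the other candidates are hypothetical.
        subnet, err := firstFreeSubnet([]string{"192.168.39.0/24", "192.168.49.0/24", "192.168.59.0/24"})
        if err != nil {
            panic(err)
        }
        fmt.Println("using free private subnet:", subnet)
    }
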
	I0708 19:54:39.816296   25689 main.go:141] libmachine: (ha-511021) Setting up store path in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021 ...
	I0708 19:54:39.816312   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:39.816258   25712 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:54:39.816333   25689 main.go:141] libmachine: (ha-511021) Building disk image from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso
	I0708 19:54:39.816446   25689 main.go:141] libmachine: (ha-511021) Downloading /home/jenkins/minikube-integration/19195-5988/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso...
	I0708 19:54:40.045141   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:40.045024   25712 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa...
	I0708 19:54:40.177060   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:40.176940   25712 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/ha-511021.rawdisk...
	I0708 19:54:40.177087   25689 main.go:141] libmachine: (ha-511021) DBG | Writing magic tar header
	I0708 19:54:40.177100   25689 main.go:141] libmachine: (ha-511021) DBG | Writing SSH key tar header
	I0708 19:54:40.177107   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:40.177071   25712 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021 ...
	I0708 19:54:40.177185   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021
	I0708 19:54:40.177228   25689 main.go:141] libmachine: (ha-511021) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021 (perms=drwx------)
	I0708 19:54:40.177238   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines
	I0708 19:54:40.177252   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:54:40.177263   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988
	I0708 19:54:40.177274   25689 main.go:141] libmachine: (ha-511021) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines (perms=drwxr-xr-x)
	I0708 19:54:40.177287   25689 main.go:141] libmachine: (ha-511021) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube (perms=drwxr-xr-x)
	I0708 19:54:40.177297   25689 main.go:141] libmachine: (ha-511021) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988 (perms=drwxrwxr-x)
	I0708 19:54:40.177304   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0708 19:54:40.177314   25689 main.go:141] libmachine: (ha-511021) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0708 19:54:40.177323   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home/jenkins
	I0708 19:54:40.177342   25689 main.go:141] libmachine: (ha-511021) DBG | Checking permissions on dir: /home
	I0708 19:54:40.177357   25689 main.go:141] libmachine: (ha-511021) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0708 19:54:40.177366   25689 main.go:141] libmachine: (ha-511021) DBG | Skipping /home - not owner
	I0708 19:54:40.177374   25689 main.go:141] libmachine: (ha-511021) Creating domain...
	I0708 19:54:40.178547   25689 main.go:141] libmachine: (ha-511021) define libvirt domain using xml: 
	I0708 19:54:40.178572   25689 main.go:141] libmachine: (ha-511021) <domain type='kvm'>
	I0708 19:54:40.178595   25689 main.go:141] libmachine: (ha-511021)   <name>ha-511021</name>
	I0708 19:54:40.178609   25689 main.go:141] libmachine: (ha-511021)   <memory unit='MiB'>2200</memory>
	I0708 19:54:40.178618   25689 main.go:141] libmachine: (ha-511021)   <vcpu>2</vcpu>
	I0708 19:54:40.178627   25689 main.go:141] libmachine: (ha-511021)   <features>
	I0708 19:54:40.178634   25689 main.go:141] libmachine: (ha-511021)     <acpi/>
	I0708 19:54:40.178638   25689 main.go:141] libmachine: (ha-511021)     <apic/>
	I0708 19:54:40.178643   25689 main.go:141] libmachine: (ha-511021)     <pae/>
	I0708 19:54:40.178654   25689 main.go:141] libmachine: (ha-511021)     
	I0708 19:54:40.178658   25689 main.go:141] libmachine: (ha-511021)   </features>
	I0708 19:54:40.178663   25689 main.go:141] libmachine: (ha-511021)   <cpu mode='host-passthrough'>
	I0708 19:54:40.178670   25689 main.go:141] libmachine: (ha-511021)   
	I0708 19:54:40.178674   25689 main.go:141] libmachine: (ha-511021)   </cpu>
	I0708 19:54:40.178679   25689 main.go:141] libmachine: (ha-511021)   <os>
	I0708 19:54:40.178684   25689 main.go:141] libmachine: (ha-511021)     <type>hvm</type>
	I0708 19:54:40.178689   25689 main.go:141] libmachine: (ha-511021)     <boot dev='cdrom'/>
	I0708 19:54:40.178693   25689 main.go:141] libmachine: (ha-511021)     <boot dev='hd'/>
	I0708 19:54:40.178741   25689 main.go:141] libmachine: (ha-511021)     <bootmenu enable='no'/>
	I0708 19:54:40.178763   25689 main.go:141] libmachine: (ha-511021)   </os>
	I0708 19:54:40.178774   25689 main.go:141] libmachine: (ha-511021)   <devices>
	I0708 19:54:40.178788   25689 main.go:141] libmachine: (ha-511021)     <disk type='file' device='cdrom'>
	I0708 19:54:40.178805   25689 main.go:141] libmachine: (ha-511021)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/boot2docker.iso'/>
	I0708 19:54:40.178816   25689 main.go:141] libmachine: (ha-511021)       <target dev='hdc' bus='scsi'/>
	I0708 19:54:40.178822   25689 main.go:141] libmachine: (ha-511021)       <readonly/>
	I0708 19:54:40.178829   25689 main.go:141] libmachine: (ha-511021)     </disk>
	I0708 19:54:40.178835   25689 main.go:141] libmachine: (ha-511021)     <disk type='file' device='disk'>
	I0708 19:54:40.178844   25689 main.go:141] libmachine: (ha-511021)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0708 19:54:40.178856   25689 main.go:141] libmachine: (ha-511021)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/ha-511021.rawdisk'/>
	I0708 19:54:40.178871   25689 main.go:141] libmachine: (ha-511021)       <target dev='hda' bus='virtio'/>
	I0708 19:54:40.178883   25689 main.go:141] libmachine: (ha-511021)     </disk>
	I0708 19:54:40.178892   25689 main.go:141] libmachine: (ha-511021)     <interface type='network'>
	I0708 19:54:40.178901   25689 main.go:141] libmachine: (ha-511021)       <source network='mk-ha-511021'/>
	I0708 19:54:40.178911   25689 main.go:141] libmachine: (ha-511021)       <model type='virtio'/>
	I0708 19:54:40.178918   25689 main.go:141] libmachine: (ha-511021)     </interface>
	I0708 19:54:40.178927   25689 main.go:141] libmachine: (ha-511021)     <interface type='network'>
	I0708 19:54:40.178944   25689 main.go:141] libmachine: (ha-511021)       <source network='default'/>
	I0708 19:54:40.178958   25689 main.go:141] libmachine: (ha-511021)       <model type='virtio'/>
	I0708 19:54:40.178971   25689 main.go:141] libmachine: (ha-511021)     </interface>
	I0708 19:54:40.178981   25689 main.go:141] libmachine: (ha-511021)     <serial type='pty'>
	I0708 19:54:40.178990   25689 main.go:141] libmachine: (ha-511021)       <target port='0'/>
	I0708 19:54:40.178997   25689 main.go:141] libmachine: (ha-511021)     </serial>
	I0708 19:54:40.179009   25689 main.go:141] libmachine: (ha-511021)     <console type='pty'>
	I0708 19:54:40.179019   25689 main.go:141] libmachine: (ha-511021)       <target type='serial' port='0'/>
	I0708 19:54:40.179039   25689 main.go:141] libmachine: (ha-511021)     </console>
	I0708 19:54:40.179058   25689 main.go:141] libmachine: (ha-511021)     <rng model='virtio'>
	I0708 19:54:40.179069   25689 main.go:141] libmachine: (ha-511021)       <backend model='random'>/dev/random</backend>
	I0708 19:54:40.179076   25689 main.go:141] libmachine: (ha-511021)     </rng>
	I0708 19:54:40.179096   25689 main.go:141] libmachine: (ha-511021)     
	I0708 19:54:40.179104   25689 main.go:141] libmachine: (ha-511021)     
	I0708 19:54:40.179109   25689 main.go:141] libmachine: (ha-511021)   </devices>
	I0708 19:54:40.179112   25689 main.go:141] libmachine: (ha-511021) </domain>
	I0708 19:54:40.179120   25689 main.go:141] libmachine: (ha-511021) 
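
Aside (illustration only): the <domain> XML above is what libmachine hands to libvirt to define and boot ha-511021. Done by hand, the same step amounts to `virsh define` followed by `virsh start`; the Go sketch below just shells out to virsh and is not the kvm2 driver's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // defineAndStart writes domainXML to a temp file, registers it with libvirt
    // via `virsh define`, then boots the domain with `virsh start`.
    func defineAndStart(name, domainXML string) error {
        f, err := os.CreateTemp("", name+"-*.xml")
        if err != nil {
            return err
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(domainXML); err != nil {
            return err
        }
        if err := f.Close(); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"define", f.Name()},
            {"start", name},
        } {
            cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("virsh %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        // domainXML would be the <domain type='kvm'>...</domain> document shown above.
        const domainXML = "<domain type='kvm'>...</domain>" // placeholder, not a valid domain
        if err := defineAndStart("ha-511021", domainXML); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
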
	I0708 19:54:40.183577   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:68:53:2b in network default
	I0708 19:54:40.184048   25689 main.go:141] libmachine: (ha-511021) Ensuring networks are active...
	I0708 19:54:40.184062   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:40.184725   25689 main.go:141] libmachine: (ha-511021) Ensuring network default is active
	I0708 19:54:40.184920   25689 main.go:141] libmachine: (ha-511021) Ensuring network mk-ha-511021 is active
	I0708 19:54:40.185353   25689 main.go:141] libmachine: (ha-511021) Getting domain xml...
	I0708 19:54:40.185973   25689 main.go:141] libmachine: (ha-511021) Creating domain...
	I0708 19:54:41.366987   25689 main.go:141] libmachine: (ha-511021) Waiting to get IP...
	I0708 19:54:41.367752   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:41.368118   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:41.368146   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:41.368093   25712 retry.go:31] will retry after 263.500393ms: waiting for machine to come up
	I0708 19:54:41.635094   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:41.635654   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:41.635684   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:41.635587   25712 retry.go:31] will retry after 349.843209ms: waiting for machine to come up
	I0708 19:54:41.987220   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:41.987653   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:41.987679   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:41.987609   25712 retry.go:31] will retry after 367.765084ms: waiting for machine to come up
	I0708 19:54:42.357171   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:42.357540   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:42.357566   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:42.357495   25712 retry.go:31] will retry after 460.024411ms: waiting for machine to come up
	I0708 19:54:42.819139   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:42.819478   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:42.819502   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:42.819417   25712 retry.go:31] will retry after 747.974264ms: waiting for machine to come up
	I0708 19:54:43.569274   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:43.569664   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:43.569688   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:43.569626   25712 retry.go:31] will retry after 651.085668ms: waiting for machine to come up
	I0708 19:54:44.222296   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:44.222750   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:44.222777   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:44.222704   25712 retry.go:31] will retry after 959.305664ms: waiting for machine to come up
	I0708 19:54:45.183309   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:45.183677   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:45.183706   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:45.183669   25712 retry.go:31] will retry after 1.142334131s: waiting for machine to come up
	I0708 19:54:46.327888   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:46.328221   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:46.328241   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:46.328175   25712 retry.go:31] will retry after 1.319661086s: waiting for machine to come up
	I0708 19:54:47.649728   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:47.650122   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:47.650141   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:47.650084   25712 retry.go:31] will retry after 1.664166267s: waiting for machine to come up
	I0708 19:54:49.315484   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:49.315912   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:49.315946   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:49.315857   25712 retry.go:31] will retry after 2.828162199s: waiting for machine to come up
	I0708 19:54:52.146523   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:52.146907   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:52.146941   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:52.146890   25712 retry.go:31] will retry after 3.36474102s: waiting for machine to come up
	I0708 19:54:55.512873   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:55.513261   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find current IP address of domain ha-511021 in network mk-ha-511021
	I0708 19:54:55.513283   25689 main.go:141] libmachine: (ha-511021) DBG | I0708 19:54:55.513208   25712 retry.go:31] will retry after 3.879896256s: waiting for machine to come up
	I0708 19:54:59.397113   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.397526   25689 main.go:141] libmachine: (ha-511021) Found IP for machine: 192.168.39.33
	I0708 19:54:59.397555   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has current primary IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
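
Aside (illustration only): the retry.go lines above poll for the domain's DHCP lease with growing, jittered delays (263ms, 349ms, 367ms, ... up to several seconds) until an IP appears. A hedged sketch of such a wait loop; the probe function is a stand-in for the lease lookup and is not minikube's code:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor polls probe() with a jittered, growing delay until it succeeds
    // or the deadline passes.
    func waitFor(probe func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := probe(); err == nil {
                return ip, nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        start := time.Now()
        ip, err := waitFor(func() (string, error) {
            if time.Since(start) < 3*time.Second {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.33", nil // address taken from the log
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
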
	I0708 19:54:59.397564   25689 main.go:141] libmachine: (ha-511021) Reserving static IP address...
	I0708 19:54:59.397902   25689 main.go:141] libmachine: (ha-511021) DBG | unable to find host DHCP lease matching {name: "ha-511021", mac: "52:54:00:fe:1e:ad", ip: "192.168.39.33"} in network mk-ha-511021
	I0708 19:54:59.470686   25689 main.go:141] libmachine: (ha-511021) DBG | Getting to WaitForSSH function...
	I0708 19:54:59.470713   25689 main.go:141] libmachine: (ha-511021) Reserved static IP address: 192.168.39.33
	I0708 19:54:59.470736   25689 main.go:141] libmachine: (ha-511021) Waiting for SSH to be available...
	I0708 19:54:59.473464   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.473834   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:54:59.473868   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.473954   25689 main.go:141] libmachine: (ha-511021) DBG | Using SSH client type: external
	I0708 19:54:59.473992   25689 main.go:141] libmachine: (ha-511021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa (-rw-------)
	I0708 19:54:59.474032   25689 main.go:141] libmachine: (ha-511021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 19:54:59.474054   25689 main.go:141] libmachine: (ha-511021) DBG | About to run SSH command:
	I0708 19:54:59.474067   25689 main.go:141] libmachine: (ha-511021) DBG | exit 0
	I0708 19:54:59.600057   25689 main.go:141] libmachine: (ha-511021) DBG | SSH cmd err, output: <nil>: 
	I0708 19:54:59.600395   25689 main.go:141] libmachine: (ha-511021) KVM machine creation complete!
	I0708 19:54:59.600702   25689 main.go:141] libmachine: (ha-511021) Calling .GetConfigRaw
	I0708 19:54:59.601244   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:54:59.601479   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:54:59.601690   25689 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0708 19:54:59.601718   25689 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 19:54:59.603230   25689 main.go:141] libmachine: Detecting operating system of created instance...
	I0708 19:54:59.603244   25689 main.go:141] libmachine: Waiting for SSH to be available...
	I0708 19:54:59.603249   25689 main.go:141] libmachine: Getting to WaitForSSH function...
	I0708 19:54:59.603255   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:54:59.605670   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.606032   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:54:59.606067   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.606237   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:54:59.606446   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.606666   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.606834   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:54:59.606990   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:54:59.607203   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 19:54:59.607218   25689 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0708 19:54:59.714903   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
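
Aside (illustration only): both SSH probes above simply run `exit 0` as the docker user with the generated machine key; success means sshd is reachable and the key is accepted. A minimal sketch using golang.org/x/crypto/ssh (an assumption; the driver's own SSH plumbing differs):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // sshReady dials user@addr with the given private key and runs `exit 0`.
    func sshReady(addr, user, keyPath string) error {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0")
    }

    func main() {
        // Address, user and key path are the ones reported in the log.
        err := sshReady("192.168.39.33:22", "docker",
            "/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa")
        fmt.Println("ssh ready:", err == nil, err)
    }
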
	I0708 19:54:59.714921   25689 main.go:141] libmachine: Detecting the provisioner...
	I0708 19:54:59.714929   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:54:59.717832   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.718207   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:54:59.718249   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.718390   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:54:59.718593   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.718742   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.718844   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:54:59.718988   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:54:59.719185   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 19:54:59.719200   25689 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0708 19:54:59.828555   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0708 19:54:59.828612   25689 main.go:141] libmachine: found compatible host: buildroot
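
Aside (illustration only): the provisioner is detected by reading /etc/os-release on the guest and matching it against known distributions (Buildroot here). A small sketch of parsing that file, assuming plain KEY=VALUE lines with optional quotes:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // parseOSRelease reads an os-release style file into a map,
    // trimming surrounding quotes from values.
    func parseOSRelease(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        out := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`)
        }
        return out, sc.Err()
    }

    func main() {
        info, err := parseOSRelease("/etc/os-release")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // On the VM above this would report buildroot 2023.02.9.
        fmt.Println(info["ID"], info["VERSION_ID"])
    }
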
	I0708 19:54:59.828619   25689 main.go:141] libmachine: Provisioning with buildroot...
	I0708 19:54:59.828626   25689 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 19:54:59.828872   25689 buildroot.go:166] provisioning hostname "ha-511021"
	I0708 19:54:59.828891   25689 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 19:54:59.829084   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:54:59.831701   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.832072   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:54:59.832098   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.832244   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:54:59.832565   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.832721   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.832857   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:54:59.833015   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:54:59.833200   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 19:54:59.833215   25689 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-511021 && echo "ha-511021" | sudo tee /etc/hostname
	I0708 19:54:59.954208   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-511021
	
	I0708 19:54:59.954240   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:54:59.957219   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.957536   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:54:59.957566   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:54:59.957747   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:54:59.957959   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.958145   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:54:59.958310   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:54:59.958455   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:54:59.958649   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 19:54:59.958672   25689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-511021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-511021/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-511021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 19:55:00.073351   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 19:55:00.073377   25689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 19:55:00.073414   25689 buildroot.go:174] setting up certificates
	I0708 19:55:00.073439   25689 provision.go:84] configureAuth start
	I0708 19:55:00.073451   25689 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 19:55:00.073731   25689 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 19:55:00.076659   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.077115   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.077139   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.077391   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.079629   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.080022   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.080068   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.080210   25689 provision.go:143] copyHostCerts
	I0708 19:55:00.080241   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 19:55:00.080299   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 19:55:00.080310   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 19:55:00.080377   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 19:55:00.080452   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 19:55:00.080474   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 19:55:00.080481   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 19:55:00.080504   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 19:55:00.080547   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 19:55:00.080562   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 19:55:00.080568   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 19:55:00.080587   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 19:55:00.080635   25689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.ha-511021 san=[127.0.0.1 192.168.39.33 ha-511021 localhost minikube]
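
Aside (illustration only): provision.go:117 issues a server certificate signed by the profile's CA with the SANs listed above (127.0.0.1, 192.168.39.33, ha-511021, localhost, minikube). A hedged crypto/x509 sketch of that step; it assumes the CA key is an RSA key in PKCS#1 PEM form, and the file names, organization and expiry are taken from the log and config dump rather than from minikube's code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "errors"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    // signServerCert issues a server certificate for the given SANs, signed by
    // the CA whose certificate and RSA key are in caCertPEM/caKeyPEM.
    func signServerCert(caCertPEM, caKeyPEM []byte, dnsNames []string, ips []net.IP) (certPEM, keyPEM []byte, err error) {
        caBlock, _ := pem.Decode(caCertPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        if caBlock == nil || keyBlock == nil {
            return nil, nil, errors.New("bad CA PEM input")
        }
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            return nil, nil, err
        }
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            return nil, nil, err
        }
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-511021"}}, // org from the log
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames,
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
        return certPEM, keyPEM, nil
    }

    func main() {
        caCert, err := os.ReadFile("ca.pem") // stand-ins for the certs/ca.pem and certs/ca-key.pem paths in the log
        if err != nil {
            panic(err)
        }
        caKey, err := os.ReadFile("ca-key.pem")
        if err != nil {
            panic(err)
        }
        certPEM, keyPEM, err := signServerCert(caCert, caKey,
            []string{"ha-511021", "localhost", "minikube"},
            []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.33")})
        if err != nil {
            panic(err)
        }
        fmt.Printf("server.pem: %d bytes, server-key.pem: %d bytes\n", len(certPEM), len(keyPEM))
    }
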
	I0708 19:55:00.264734   25689 provision.go:177] copyRemoteCerts
	I0708 19:55:00.264785   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 19:55:00.264806   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.267804   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.268185   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.268214   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.268450   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:00.268651   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.268828   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:00.268965   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:00.354041   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 19:55:00.354113   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 19:55:00.380126   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 19:55:00.380202   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0708 19:55:00.406408   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 19:55:00.406474   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 19:55:00.434875   25689 provision.go:87] duration metric: took 361.421634ms to configureAuth
	I0708 19:55:00.434902   25689 buildroot.go:189] setting minikube options for container-runtime
	I0708 19:55:00.435106   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:55:00.435203   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.437630   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.437884   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.437909   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.438066   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:00.438261   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.438445   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.438605   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:00.438746   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:00.438926   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 19:55:00.438949   25689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 19:55:00.709587   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 19:55:00.709617   25689 main.go:141] libmachine: Checking connection to Docker...
	I0708 19:55:00.709625   25689 main.go:141] libmachine: (ha-511021) Calling .GetURL
	I0708 19:55:00.710958   25689 main.go:141] libmachine: (ha-511021) DBG | Using libvirt version 6000000
	I0708 19:55:00.712974   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.713254   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.713274   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.713456   25689 main.go:141] libmachine: Docker is up and running!
	I0708 19:55:00.713469   25689 main.go:141] libmachine: Reticulating splines...
	I0708 19:55:00.713477   25689 client.go:171] duration metric: took 20.97120701s to LocalClient.Create
	I0708 19:55:00.713502   25689 start.go:167] duration metric: took 20.971270107s to libmachine.API.Create "ha-511021"
	I0708 19:55:00.713514   25689 start.go:293] postStartSetup for "ha-511021" (driver="kvm2")
	I0708 19:55:00.713526   25689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 19:55:00.713558   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:00.713770   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 19:55:00.713790   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.715882   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.716236   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.716255   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.716435   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:00.716616   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.716806   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:00.716940   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:00.802288   25689 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 19:55:00.806405   25689 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 19:55:00.806428   25689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 19:55:00.806492   25689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 19:55:00.806594   25689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 19:55:00.806607   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /etc/ssl/certs/131412.pem
	I0708 19:55:00.806723   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 19:55:00.816350   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 19:55:00.841111   25689 start.go:296] duration metric: took 127.584278ms for postStartSetup
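
Aside (illustration only): filesync.go scans the profile's files/ directory and mirrors each file onto the guest at the same relative path, which is why files/etc/ssl/certs/131412.pem lands in /etc/ssl/certs. A sketch of building that source-to-destination mapping (the copy itself, done over SSH above, is omitted):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    // localAssets walks filesDir and maps every regular file to the absolute
    // destination path it should occupy on the guest (its path relative to filesDir).
    func localAssets(filesDir string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(filesDir, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, relErr := filepath.Rel(filesDir, path)
            if relErr != nil {
                return relErr
            }
            assets[path] = "/" + filepath.ToSlash(rel)
            return nil
        })
        return assets, err
    }

    func main() {
        // Directory name taken from the log; adjust as needed.
        assets, err := localAssets("/home/jenkins/minikube-integration/19195-5988/.minikube/files")
        if err != nil {
            fmt.Println(err)
            return
        }
        for src, dst := range assets {
            fmt.Printf("%s -> %s\n", src, dst) // e.g. .../files/etc/ssl/certs/131412.pem -> /etc/ssl/certs/131412.pem
        }
    }
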
	I0708 19:55:00.841154   25689 main.go:141] libmachine: (ha-511021) Calling .GetConfigRaw
	I0708 19:55:00.841827   25689 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 19:55:00.844230   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.844540   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.844567   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.844773   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:55:00.844938   25689 start.go:128] duration metric: took 21.120872101s to createHost
	I0708 19:55:00.844959   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.846861   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.847129   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.847154   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.847287   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:00.847492   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.847648   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.847780   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:00.847916   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:00.848078   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 19:55:00.848087   25689 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 19:55:00.956499   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720468500.928643392
	
	I0708 19:55:00.956531   25689 fix.go:216] guest clock: 1720468500.928643392
	I0708 19:55:00.956539   25689 fix.go:229] Guest: 2024-07-08 19:55:00.928643392 +0000 UTC Remote: 2024-07-08 19:55:00.844949642 +0000 UTC m=+21.230644795 (delta=83.69375ms)
	I0708 19:55:00.956574   25689 fix.go:200] guest clock delta is within tolerance: 83.69375ms
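
Aside (illustration only): fix.go reads the guest clock with `date +%s.%N` over SSH and compares it to the host clock; here the 83.69375ms delta is within tolerance, so no adjustment is needed. A tiny sketch of that comparison; the one-second tolerance below is an assumption, not the value minikube uses:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far the
    // guest clock is from hostNow (positive means the guest is ahead).
    func clockDelta(guestEpoch string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestEpoch, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(hostNow), nil
    }

    func main() {
        const tolerance = time.Second // assumed threshold
        // Values from the log: guest clock string vs. host time at (roughly) the same instant.
        host := time.Date(2024, 7, 8, 19, 55, 0, 844949642, time.UTC)
        delta, err := clockDelta("1720468500.928643392", host)
        if err != nil {
            panic(err)
        }
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
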
	I0708 19:55:00.956587   25689 start.go:83] releasing machines lock for "ha-511021", held for 21.232586521s
	I0708 19:55:00.956608   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:00.956859   25689 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 19:55:00.959369   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.959802   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.959831   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.959990   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:00.960466   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:00.960617   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:00.960673   25689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 19:55:00.960713   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.960823   25689 ssh_runner.go:195] Run: cat /version.json
	I0708 19:55:00.960846   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:00.963523   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.963751   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.963849   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.963877   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.964000   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:00.964148   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:00.964168   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.964226   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:00.964347   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:00.964356   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:00.964476   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:00.964502   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:00.964624   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:00.964748   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:01.041150   25689 ssh_runner.go:195] Run: systemctl --version
	I0708 19:55:01.065415   25689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 19:55:01.226264   25689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 19:55:01.233290   25689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 19:55:01.233360   25689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 19:55:01.250592   25689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 19:55:01.250619   25689 start.go:494] detecting cgroup driver to use...
	I0708 19:55:01.250704   25689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 19:55:01.270164   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 19:55:01.285178   25689 docker.go:217] disabling cri-docker service (if available) ...
	I0708 19:55:01.285251   25689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 19:55:01.299973   25689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 19:55:01.314671   25689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 19:55:01.429194   25689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 19:55:01.598542   25689 docker.go:233] disabling docker service ...
	I0708 19:55:01.598602   25689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 19:55:01.614109   25689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 19:55:01.627759   25689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 19:55:01.769835   25689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 19:55:01.899695   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 19:55:01.914521   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 19:55:01.934549   25689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 19:55:01.934617   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:01.946357   25689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 19:55:01.946430   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:01.958494   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:01.971068   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:01.983335   25689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 19:55:01.995240   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:02.006738   25689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:02.024510   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:02.036171   25689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 19:55:02.046861   25689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 19:55:02.046946   25689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 19:55:02.062314   25689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 19:55:02.073111   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:55:02.189279   25689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 19:55:02.323848   25689 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 19:55:02.323929   25689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 19:55:02.328847   25689 start.go:562] Will wait 60s for crictl version
	I0708 19:55:02.328911   25689 ssh_runner.go:195] Run: which crictl
	I0708 19:55:02.332927   25689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 19:55:02.377418   25689 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 19:55:02.377489   25689 ssh_runner.go:195] Run: crio --version
	I0708 19:55:02.406746   25689 ssh_runner.go:195] Run: crio --version
	I0708 19:55:02.439026   25689 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 19:55:02.440298   25689 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 19:55:02.442945   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:02.443243   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:02.443267   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:02.443553   25689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 19:55:02.448030   25689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:55:02.461988   25689 kubeadm.go:877] updating cluster {Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 19:55:02.462085   25689 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 19:55:02.462131   25689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 19:55:02.497382   25689 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 19:55:02.497456   25689 ssh_runner.go:195] Run: which lz4
	I0708 19:55:02.501498   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0708 19:55:02.501585   25689 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 19:55:02.506177   25689 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 19:55:02.506207   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 19:55:03.961052   25689 crio.go:462] duration metric: took 1.459490708s to copy over tarball
	I0708 19:55:03.961131   25689 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 19:55:06.114665   25689 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.15350267s)
	I0708 19:55:06.114696   25689 crio.go:469] duration metric: took 2.153618785s to extract the tarball
	I0708 19:55:06.114703   25689 ssh_runner.go:146] rm: /preloaded.tar.lz4
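
	The preload path shown above is: list images with crictl, stat the tarball on the guest, scp it from the host cache when missing, extract it under /var with tar -I lz4 (preserving xattrs so file capabilities survive), then delete it. A hedged Go sketch of the extract-and-clean-up step only, run locally with the system tar and lz4 assumed to be installed.

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    // extractPreload mirrors the tar invocation in the log: extract the lz4
    // preload tarball under dest, then delete it to free space on the guest.
    func extractPreload(tarball, dest string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return err // minikube scp's the tarball from its host cache before this point
    	}
    	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", dest, "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		return err
    	}
    	return os.Remove(tarball)
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		log.Fatal(err)
    	}
    }
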
	I0708 19:55:06.153351   25689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 19:55:06.202758   25689 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 19:55:06.202780   25689 cache_images.go:84] Images are preloaded, skipping loading
	I0708 19:55:06.202789   25689 kubeadm.go:928] updating node { 192.168.39.33 8443 v1.30.2 crio true true} ...
	I0708 19:55:06.202902   25689 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-511021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 19:55:06.202965   25689 ssh_runner.go:195] Run: crio config
	I0708 19:55:06.250069   25689 cni.go:84] Creating CNI manager for ""
	I0708 19:55:06.250085   25689 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 19:55:06.250093   25689 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 19:55:06.250111   25689 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.33 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-511021 NodeName:ha-511021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 19:55:06.250280   25689 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-511021"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
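	The multi-document config above is rendered by minikube from its kubeadm options (pod CIDR 10.244.0.0/16, service CIDR 10.96.0.0/12, cgroupfs driver, CRI-O socket) and later written to /var/tmp/minikube/kubeadm.yaml.new. A hedged text/template sketch of rendering just the ClusterConfiguration document from a small struct; the struct and field names are illustrative, not minikube's actual types.

    package main

    import (
    	"os"
    	"text/template"
    )

    // params holds only the values that vary per cluster in this sketch.
    type params struct {
    	ClusterName       string
    	KubernetesVersion string
    	PodSubnet         string
    	ServiceSubnet     string
    }

    const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: {{.ClusterName}}
    controlPlaneEndpoint: control-plane.minikube.internal:8443
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	p := params{
    		ClusterName:       "mk",
    		KubernetesVersion: "v1.30.2",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceSubnet:     "10.96.0.0/12",
    	}
    	// Render the ClusterConfiguration document to stdout.
    	if err := template.Must(template.New("cfg").Parse(clusterCfg)).Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }
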
	I0708 19:55:06.250303   25689 kube-vip.go:115] generating kube-vip config ...
	I0708 19:55:06.250349   25689 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0708 19:55:06.269168   25689 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0708 19:55:06.269284   25689 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
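
	kube-vip is deployed as a static pod: the kubelet watches staticPodPath (/etc/kubernetes/manifests in the KubeletConfiguration above) and runs whatever manifests land there, which is why the next steps simply copy kube-vip.yaml into that directory. A hedged sketch of writing such a manifest atomically; the dot-prefixed temp file is skipped by the kubelet, so it only ever sees the finished file (paths are assumptions).

    package main

    import (
    	"log"
    	"os"
    	"path/filepath"
    )

    // writeStaticPod drops a manifest into the kubelet's static pod directory.
    // The hidden temp name is ignored by the kubelet and the rename makes the
    // manifest appear atomically, never half-written.
    func writeStaticPod(dir, name string, manifest []byte) error {
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		return err
    	}
    	tmp := filepath.Join(dir, "."+name+".tmp")
    	if err := os.WriteFile(tmp, manifest, 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, filepath.Join(dir, name))
    }

    func main() {
    	manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n")
    	if err := writeStaticPod("/etc/kubernetes/manifests", "kube-vip.yaml", manifest); err != nil {
    		log.Fatal(err)
    	}
    }
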
	I0708 19:55:06.269345   25689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 19:55:06.279384   25689 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 19:55:06.279475   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0708 19:55:06.289203   25689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0708 19:55:06.306698   25689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 19:55:06.324526   25689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0708 19:55:06.341335   25689 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0708 19:55:06.358722   25689 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0708 19:55:06.362943   25689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:55:06.376102   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:55:06.492892   25689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:55:06.510981   25689 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021 for IP: 192.168.39.33
	I0708 19:55:06.511007   25689 certs.go:194] generating shared ca certs ...
	I0708 19:55:06.511022   25689 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:06.511192   25689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 19:55:06.511248   25689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 19:55:06.511263   25689 certs.go:256] generating profile certs ...
	I0708 19:55:06.511331   25689 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key
	I0708 19:55:06.511355   25689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.crt with IP's: []
	I0708 19:55:06.695699   25689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.crt ...
	I0708 19:55:06.695728   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.crt: {Name:mke97764dd135ab9d0e1fc55099f96d1b806e54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:06.695921   25689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key ...
	I0708 19:55:06.695936   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key: {Name:mk53c15aa980b0692c0d4c2e27e159704091483b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:06.696035   25689 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.655f3ec0
	I0708 19:55:06.696051   25689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.655f3ec0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.33 192.168.39.254]
	I0708 19:55:06.853818   25689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.655f3ec0 ...
	I0708 19:55:06.853848   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.655f3ec0: {Name:mke1c560140d2b33b7839a6aaf663f5c37079bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:06.854036   25689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.655f3ec0 ...
	I0708 19:55:06.854052   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.655f3ec0: {Name:mkf58a0bcd6873684c72bf33352fced7876fdfac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:06.854146   25689 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.655f3ec0 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt
	I0708 19:55:06.854241   25689 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.655f3ec0 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key
	I0708 19:55:06.854301   25689 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key
	I0708 19:55:06.854316   25689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt with IP's: []
	I0708 19:55:07.356523   25689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt ...
	I0708 19:55:07.356553   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt: {Name:mk88bac3c3c9852133ee72c0b6f05a2a984c8dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:07.356710   25689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key ...
	I0708 19:55:07.356721   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key: {Name:mkd63a74860318e3b37978b8c4c8682a51f4eea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
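
	The apiserver profile certificate generated above is signed by the shared minikube CA and carries the service IP, loopback, and the node and HA VIP addresses as SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.33, 192.168.39.254). A hedged, self-contained crypto/x509 sketch of issuing such a serving cert; the inline throwaway CA and the DNS names are illustrative, whereas the real flow reuses ca.key from the minikube home.

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA key and template (the real flow loads an existing CA key).
    	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}

    	// Serving certificate with the IP SANs seen in the log above.
    	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	leaf := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.33"), net.ParseIP("192.168.39.254"),
    		},
    		DNSNames: []string{"control-plane.minikube.internal"}, // illustrative
    	}
    	der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
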
	I0708 19:55:07.356785   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 19:55:07.356802   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 19:55:07.356812   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 19:55:07.356825   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 19:55:07.356837   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 19:55:07.356849   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 19:55:07.356862   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 19:55:07.356873   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 19:55:07.356923   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 19:55:07.356956   25689 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 19:55:07.356965   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 19:55:07.356986   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 19:55:07.357008   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 19:55:07.357028   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 19:55:07.357062   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 19:55:07.357088   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:07.357102   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem -> /usr/share/ca-certificates/13141.pem
	I0708 19:55:07.357116   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /usr/share/ca-certificates/131412.pem
	I0708 19:55:07.357613   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 19:55:07.394108   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 19:55:07.423235   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 19:55:07.450103   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 19:55:07.480148   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0708 19:55:07.508796   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 19:55:07.533501   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 19:55:07.559283   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 19:55:07.584004   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 19:55:07.608704   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 19:55:07.635038   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 19:55:07.660483   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 19:55:07.678814   25689 ssh_runner.go:195] Run: openssl version
	I0708 19:55:07.685231   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 19:55:07.697337   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 19:55:07.702100   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 19:55:07.702171   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 19:55:07.708383   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 19:55:07.720214   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 19:55:07.732125   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 19:55:07.736877   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 19:55:07.736930   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 19:55:07.742747   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 19:55:07.754534   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 19:55:07.766389   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:07.771122   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:07.771183   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:07.777309   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
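
	Each of the three blocks above installs a CA bundle under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL's directory lookup finds trust anchors. A hedged sketch of that hash-and-symlink step, shelling out to openssl exactly as the log does; openssl on PATH and write access to /etc/ssl/certs are assumptions.

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of certPath and creates
    // the "<hash>.0" symlink in certsDir, matching the ln -fs commands in the log.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // -f semantics: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("linked")
    }
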
	I0708 19:55:07.789376   25689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 19:55:07.793896   25689 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 19:55:07.793952   25689 kubeadm.go:391] StartCluster: {Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:55:07.794053   25689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 19:55:07.794108   25689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 19:55:07.837758   25689 cri.go:89] found id: ""
	I0708 19:55:07.837819   25689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 19:55:07.848649   25689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 19:55:07.859432   25689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 19:55:07.871520   25689 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 19:55:07.871543   25689 kubeadm.go:156] found existing configuration files:
	
	I0708 19:55:07.871588   25689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 19:55:07.882400   25689 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 19:55:07.882463   25689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 19:55:07.893059   25689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 19:55:07.903472   25689 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 19:55:07.903534   25689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 19:55:07.914821   25689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 19:55:07.925118   25689 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 19:55:07.925171   25689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 19:55:07.935632   25689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 19:55:07.945862   25689 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 19:55:07.945919   25689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 19:55:07.957001   25689 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 19:55:08.064510   25689 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 19:55:08.064591   25689 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 19:55:08.220059   25689 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 19:55:08.220151   25689 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 19:55:08.220237   25689 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0708 19:55:08.424551   25689 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 19:55:08.449925   25689 out.go:204]   - Generating certificates and keys ...
	I0708 19:55:08.450055   25689 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 19:55:08.450141   25689 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 19:55:08.613766   25689 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0708 19:55:08.811113   25689 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0708 19:55:08.979231   25689 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0708 19:55:09.093594   25689 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0708 19:55:09.323369   25689 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0708 19:55:09.323626   25689 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-511021 localhost] and IPs [192.168.39.33 127.0.0.1 ::1]
	I0708 19:55:09.668270   25689 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0708 19:55:09.668543   25689 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-511021 localhost] and IPs [192.168.39.33 127.0.0.1 ::1]
	I0708 19:55:09.737094   25689 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0708 19:55:09.938904   25689 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0708 19:55:10.056296   25689 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0708 19:55:10.056385   25689 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 19:55:10.229973   25689 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 19:55:10.438458   25689 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 19:55:10.585166   25689 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 19:55:10.735716   25689 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 19:55:10.888057   25689 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 19:55:10.889454   25689 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 19:55:10.893265   25689 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 19:55:10.895315   25689 out.go:204]   - Booting up control plane ...
	I0708 19:55:10.895422   25689 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 19:55:10.895544   25689 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 19:55:10.895631   25689 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 19:55:10.912031   25689 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 19:55:10.913120   25689 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 19:55:10.913185   25689 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 19:55:11.057049   25689 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 19:55:11.057160   25689 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 19:55:11.555851   25689 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.246575ms
	I0708 19:55:11.555972   25689 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 19:55:18.055410   25689 kubeadm.go:309] [api-check] The API server is healthy after 6.503225834s
	I0708 19:55:18.076650   25689 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 19:55:18.100189   25689 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 19:55:18.132982   25689 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 19:55:18.133239   25689 kubeadm.go:309] [mark-control-plane] Marking the node ha-511021 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 19:55:18.149997   25689 kubeadm.go:309] [bootstrap-token] Using token: fnvqsi.ql5n6lfkoy8q2zw7
	I0708 19:55:18.151537   25689 out.go:204]   - Configuring RBAC rules ...
	I0708 19:55:18.151630   25689 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 19:55:18.156761   25689 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 19:55:18.169164   25689 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 19:55:18.176351   25689 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 19:55:18.180247   25689 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 19:55:18.183990   25689 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 19:55:18.465664   25689 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 19:55:18.913327   25689 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 19:55:19.465637   25689 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 19:55:19.466620   25689 kubeadm.go:309] 
	I0708 19:55:19.466682   25689 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 19:55:19.466687   25689 kubeadm.go:309] 
	I0708 19:55:19.466754   25689 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 19:55:19.466761   25689 kubeadm.go:309] 
	I0708 19:55:19.466787   25689 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 19:55:19.466856   25689 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 19:55:19.466916   25689 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 19:55:19.466924   25689 kubeadm.go:309] 
	I0708 19:55:19.467008   25689 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 19:55:19.467041   25689 kubeadm.go:309] 
	I0708 19:55:19.467116   25689 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 19:55:19.467135   25689 kubeadm.go:309] 
	I0708 19:55:19.467211   25689 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 19:55:19.467272   25689 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 19:55:19.467332   25689 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 19:55:19.467338   25689 kubeadm.go:309] 
	I0708 19:55:19.467424   25689 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 19:55:19.467505   25689 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 19:55:19.467517   25689 kubeadm.go:309] 
	I0708 19:55:19.467633   25689 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fnvqsi.ql5n6lfkoy8q2zw7 \
	I0708 19:55:19.467774   25689 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 19:55:19.467802   25689 kubeadm.go:309] 	--control-plane 
	I0708 19:55:19.467813   25689 kubeadm.go:309] 
	I0708 19:55:19.467935   25689 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 19:55:19.467947   25689 kubeadm.go:309] 
	I0708 19:55:19.468058   25689 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fnvqsi.ql5n6lfkoy8q2zw7 \
	I0708 19:55:19.468204   25689 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 19:55:19.468641   25689 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 19:55:19.468710   25689 cni.go:84] Creating CNI manager for ""
	I0708 19:55:19.468723   25689 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 19:55:19.470694   25689 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0708 19:55:19.472019   25689 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0708 19:55:19.478275   25689 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0708 19:55:19.478292   25689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0708 19:55:19.503581   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0708 19:55:19.867546   25689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 19:55:19.867644   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:19.867644   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-511021 minikube.k8s.io/updated_at=2024_07_08T19_55_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=ha-511021 minikube.k8s.io/primary=true
	I0708 19:55:19.888879   25689 ops.go:34] apiserver oom_adj: -16
	I0708 19:55:20.082005   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:20.583085   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:21.082145   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:21.582700   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:22.082462   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:22.582940   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:23.082590   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:23.582423   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:24.082781   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:24.582976   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:25.083032   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:25.582674   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:26.083052   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:26.582821   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:27.082293   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:27.583072   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:28.082385   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:28.582339   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:29.082683   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:29.582048   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:30.082107   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:30.582938   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:31.082831   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:31.583050   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 19:55:31.715376   25689 kubeadm.go:1107] duration metric: took 11.847797877s to wait for elevateKubeSystemPrivileges
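
	The burst of "kubectl get sa default" runs between 19:55:20 and 19:55:31 is a plain poll loop: the default ServiceAccount only exists once the controller manager has created it, so minikube retries roughly every 500ms until the command succeeds (11.8s here). A hedged sketch of that retry pattern; kubectl on PATH and the kubeconfig path are assumptions.

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
    // deadline passes, the same pattern as the repeated runs in the log above.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default").Run()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return err
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
    		log.Fatal(err)
    	}
    }
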
	W0708 19:55:31.715411   25689 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 19:55:31.715420   25689 kubeadm.go:393] duration metric: took 23.921473775s to StartCluster
	I0708 19:55:31.715439   25689 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:31.715531   25689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:55:31.716201   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:31.716405   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 19:55:31.716415   25689 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:55:31.716431   25689 start.go:240] waiting for startup goroutines ...
	I0708 19:55:31.716440   25689 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 19:55:31.716484   25689 addons.go:69] Setting storage-provisioner=true in profile "ha-511021"
	I0708 19:55:31.716504   25689 addons.go:69] Setting default-storageclass=true in profile "ha-511021"
	I0708 19:55:31.716514   25689 addons.go:234] Setting addon storage-provisioner=true in "ha-511021"
	I0708 19:55:31.716535   25689 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-511021"
	I0708 19:55:31.716544   25689 host.go:66] Checking if "ha-511021" exists ...
	I0708 19:55:31.716675   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:55:31.716895   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:31.716924   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:31.716951   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:31.716994   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:31.733136   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45125
	I0708 19:55:31.733170   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0708 19:55:31.733675   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:31.733704   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:31.734218   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:31.734243   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:31.734221   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:31.734260   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:31.734559   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:31.734563   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:31.734739   25689 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 19:55:31.735211   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:31.735251   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:31.737036   25689 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:55:31.737389   25689 kapi.go:59] client config for ha-511021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key", CAFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfdf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 19:55:31.737955   25689 cert_rotation.go:137] Starting client certificate rotation controller
	I0708 19:55:31.738268   25689 addons.go:234] Setting addon default-storageclass=true in "ha-511021"
	I0708 19:55:31.738309   25689 host.go:66] Checking if "ha-511021" exists ...
	I0708 19:55:31.738676   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:31.738706   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:31.751038   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I0708 19:55:31.751496   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:31.752011   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:31.752027   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:31.752371   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:31.752578   25689 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 19:55:31.754405   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:31.754938   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42489
	I0708 19:55:31.755281   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:31.755878   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:31.755895   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:31.756190   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:31.756602   25689 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 19:55:31.756759   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:31.756810   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:31.758256   25689 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 19:55:31.758272   25689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 19:55:31.758287   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:31.761244   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:31.761607   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:31.761621   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:31.761906   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:31.762098   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:31.762231   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:31.762351   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:31.772637   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36585
	I0708 19:55:31.773059   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:31.773538   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:31.773573   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:31.773981   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:31.774151   25689 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 19:55:31.775990   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:31.776256   25689 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 19:55:31.776270   25689 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 19:55:31.776286   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:31.778844   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:31.779189   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:31.779213   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:31.779430   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:31.779612   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:31.779755   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:31.779968   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:31.877499   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0708 19:55:31.941292   25689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 19:55:31.978237   25689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 19:55:32.400116   25689 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
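
The run above (19:55:31.877–19:55:32.400) patches the CoreDNS ConfigMap in place: the sed one-liner inserts a `hosts` block mapping host.minikube.internal to the gateway IP 192.168.39.1 just before the `forward . /etc/resolv.conf` plugin (and a `log` directive before `errors`), then replaces the ConfigMap with kubectl. A minimal Go sketch of the same Corefile transformation, as pure string editing on a trimmed-down sample Corefile (this is not minikube's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS "hosts" block that resolves
// host.minikube.internal to hostIP, placed just before the forward plugin,
// mirroring what the sed one-liner in the log does to the Corefile.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)

	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	// Trimmed-down Corefile for illustration only.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
```
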
	I0708 19:55:32.651893   25689 main.go:141] libmachine: Making call to close driver server
	I0708 19:55:32.651915   25689 main.go:141] libmachine: (ha-511021) Calling .Close
	I0708 19:55:32.651946   25689 main.go:141] libmachine: Making call to close driver server
	I0708 19:55:32.651962   25689 main.go:141] libmachine: (ha-511021) Calling .Close
	I0708 19:55:32.652197   25689 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:55:32.652220   25689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:55:32.652230   25689 main.go:141] libmachine: Making call to close driver server
	I0708 19:55:32.652238   25689 main.go:141] libmachine: (ha-511021) Calling .Close
	I0708 19:55:32.652256   25689 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:55:32.652267   25689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:55:32.652308   25689 main.go:141] libmachine: Making call to close driver server
	I0708 19:55:32.652320   25689 main.go:141] libmachine: (ha-511021) Calling .Close
	I0708 19:55:32.652272   25689 main.go:141] libmachine: (ha-511021) DBG | Closing plugin on server side
	I0708 19:55:32.652420   25689 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:55:32.652436   25689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:55:32.652572   25689 main.go:141] libmachine: (ha-511021) DBG | Closing plugin on server side
	I0708 19:55:32.652610   25689 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:55:32.652622   25689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:55:32.652793   25689 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0708 19:55:32.652804   25689 round_trippers.go:469] Request Headers:
	I0708 19:55:32.652815   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:55:32.652821   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:55:32.690663   25689 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0708 19:55:32.691193   25689 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0708 19:55:32.691207   25689 round_trippers.go:469] Request Headers:
	I0708 19:55:32.691214   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:55:32.691218   25689 round_trippers.go:473]     Content-Type: application/json
	I0708 19:55:32.691220   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:55:32.697943   25689 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0708 19:55:32.698112   25689 main.go:141] libmachine: Making call to close driver server
	I0708 19:55:32.698131   25689 main.go:141] libmachine: (ha-511021) Calling .Close
	I0708 19:55:32.698428   25689 main.go:141] libmachine: (ha-511021) DBG | Closing plugin on server side
	I0708 19:55:32.698433   25689 main.go:141] libmachine: Successfully made call to close driver server
	I0708 19:55:32.698448   25689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 19:55:32.700491   25689 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0708 19:55:32.701862   25689 addons.go:510] duration metric: took 985.416008ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0708 19:55:32.701904   25689 start.go:245] waiting for cluster config update ...
	I0708 19:55:32.701920   25689 start.go:254] writing updated cluster config ...
	I0708 19:55:32.703577   25689 out.go:177] 
	I0708 19:55:32.705002   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:55:32.705068   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:55:32.706898   25689 out.go:177] * Starting "ha-511021-m02" control-plane node in "ha-511021" cluster
	I0708 19:55:32.708244   25689 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 19:55:32.708274   25689 cache.go:56] Caching tarball of preloaded images
	I0708 19:55:32.708364   25689 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 19:55:32.708375   25689 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 19:55:32.708451   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:55:32.708627   25689 start.go:360] acquireMachinesLock for ha-511021-m02: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 19:55:32.708670   25689 start.go:364] duration metric: took 22.327µs to acquireMachinesLock for "ha-511021-m02"
	I0708 19:55:32.708687   25689 start.go:93] Provisioning new machine with config: &{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:55:32.708746   25689 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0708 19:55:32.710542   25689 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 19:55:32.710630   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:32.710656   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:32.725807   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I0708 19:55:32.726396   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:32.726875   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:32.726893   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:32.727241   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:32.727494   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetMachineName
	I0708 19:55:32.727674   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:32.727842   25689 start.go:159] libmachine.API.Create for "ha-511021" (driver="kvm2")
	I0708 19:55:32.727867   25689 client.go:168] LocalClient.Create starting
	I0708 19:55:32.727903   25689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem
	I0708 19:55:32.727944   25689 main.go:141] libmachine: Decoding PEM data...
	I0708 19:55:32.727967   25689 main.go:141] libmachine: Parsing certificate...
	I0708 19:55:32.728033   25689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem
	I0708 19:55:32.728060   25689 main.go:141] libmachine: Decoding PEM data...
	I0708 19:55:32.728076   25689 main.go:141] libmachine: Parsing certificate...
	I0708 19:55:32.728103   25689 main.go:141] libmachine: Running pre-create checks...
	I0708 19:55:32.728114   25689 main.go:141] libmachine: (ha-511021-m02) Calling .PreCreateCheck
	I0708 19:55:32.728349   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetConfigRaw
	I0708 19:55:32.728846   25689 main.go:141] libmachine: Creating machine...
	I0708 19:55:32.728874   25689 main.go:141] libmachine: (ha-511021-m02) Calling .Create
	I0708 19:55:32.729003   25689 main.go:141] libmachine: (ha-511021-m02) Creating KVM machine...
	I0708 19:55:32.730206   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found existing default KVM network
	I0708 19:55:32.730327   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found existing private KVM network mk-ha-511021
	I0708 19:55:32.730448   25689 main.go:141] libmachine: (ha-511021-m02) Setting up store path in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02 ...
	I0708 19:55:32.730467   25689 main.go:141] libmachine: (ha-511021-m02) Building disk image from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso
	I0708 19:55:32.730518   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:32.730429   26079 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:55:32.730617   25689 main.go:141] libmachine: (ha-511021-m02) Downloading /home/jenkins/minikube-integration/19195-5988/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso...
	I0708 19:55:32.948539   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:32.948390   26079 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa...
	I0708 19:55:33.237905   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:33.237782   26079 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/ha-511021-m02.rawdisk...
	I0708 19:55:33.237935   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Writing magic tar header
	I0708 19:55:33.237946   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Writing SSH key tar header
	I0708 19:55:33.237958   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:33.237893   26079 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02 ...
	I0708 19:55:33.237974   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02
	I0708 19:55:33.238025   25689 main.go:141] libmachine: (ha-511021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02 (perms=drwx------)
	I0708 19:55:33.238051   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines
	I0708 19:55:33.238064   25689 main.go:141] libmachine: (ha-511021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines (perms=drwxr-xr-x)
	I0708 19:55:33.238084   25689 main.go:141] libmachine: (ha-511021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube (perms=drwxr-xr-x)
	I0708 19:55:33.238118   25689 main.go:141] libmachine: (ha-511021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988 (perms=drwxrwxr-x)
	I0708 19:55:33.238167   25689 main.go:141] libmachine: (ha-511021-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0708 19:55:33.238187   25689 main.go:141] libmachine: (ha-511021-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0708 19:55:33.238195   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:55:33.238216   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988
	I0708 19:55:33.238235   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0708 19:55:33.238247   25689 main.go:141] libmachine: (ha-511021-m02) Creating domain...
	I0708 19:55:33.238264   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home/jenkins
	I0708 19:55:33.238277   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Checking permissions on dir: /home
	I0708 19:55:33.238291   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Skipping /home - not owner
	I0708 19:55:33.239070   25689 main.go:141] libmachine: (ha-511021-m02) define libvirt domain using xml: 
	I0708 19:55:33.239090   25689 main.go:141] libmachine: (ha-511021-m02) <domain type='kvm'>
	I0708 19:55:33.239097   25689 main.go:141] libmachine: (ha-511021-m02)   <name>ha-511021-m02</name>
	I0708 19:55:33.239103   25689 main.go:141] libmachine: (ha-511021-m02)   <memory unit='MiB'>2200</memory>
	I0708 19:55:33.239115   25689 main.go:141] libmachine: (ha-511021-m02)   <vcpu>2</vcpu>
	I0708 19:55:33.239125   25689 main.go:141] libmachine: (ha-511021-m02)   <features>
	I0708 19:55:33.239152   25689 main.go:141] libmachine: (ha-511021-m02)     <acpi/>
	I0708 19:55:33.239174   25689 main.go:141] libmachine: (ha-511021-m02)     <apic/>
	I0708 19:55:33.239198   25689 main.go:141] libmachine: (ha-511021-m02)     <pae/>
	I0708 19:55:33.239209   25689 main.go:141] libmachine: (ha-511021-m02)     
	I0708 19:55:33.239220   25689 main.go:141] libmachine: (ha-511021-m02)   </features>
	I0708 19:55:33.239231   25689 main.go:141] libmachine: (ha-511021-m02)   <cpu mode='host-passthrough'>
	I0708 19:55:33.239241   25689 main.go:141] libmachine: (ha-511021-m02)   
	I0708 19:55:33.239251   25689 main.go:141] libmachine: (ha-511021-m02)   </cpu>
	I0708 19:55:33.239261   25689 main.go:141] libmachine: (ha-511021-m02)   <os>
	I0708 19:55:33.239272   25689 main.go:141] libmachine: (ha-511021-m02)     <type>hvm</type>
	I0708 19:55:33.239285   25689 main.go:141] libmachine: (ha-511021-m02)     <boot dev='cdrom'/>
	I0708 19:55:33.239296   25689 main.go:141] libmachine: (ha-511021-m02)     <boot dev='hd'/>
	I0708 19:55:33.239321   25689 main.go:141] libmachine: (ha-511021-m02)     <bootmenu enable='no'/>
	I0708 19:55:33.239343   25689 main.go:141] libmachine: (ha-511021-m02)   </os>
	I0708 19:55:33.239365   25689 main.go:141] libmachine: (ha-511021-m02)   <devices>
	I0708 19:55:33.239384   25689 main.go:141] libmachine: (ha-511021-m02)     <disk type='file' device='cdrom'>
	I0708 19:55:33.239395   25689 main.go:141] libmachine: (ha-511021-m02)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/boot2docker.iso'/>
	I0708 19:55:33.239401   25689 main.go:141] libmachine: (ha-511021-m02)       <target dev='hdc' bus='scsi'/>
	I0708 19:55:33.239407   25689 main.go:141] libmachine: (ha-511021-m02)       <readonly/>
	I0708 19:55:33.239422   25689 main.go:141] libmachine: (ha-511021-m02)     </disk>
	I0708 19:55:33.239430   25689 main.go:141] libmachine: (ha-511021-m02)     <disk type='file' device='disk'>
	I0708 19:55:33.239436   25689 main.go:141] libmachine: (ha-511021-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0708 19:55:33.239459   25689 main.go:141] libmachine: (ha-511021-m02)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/ha-511021-m02.rawdisk'/>
	I0708 19:55:33.239476   25689 main.go:141] libmachine: (ha-511021-m02)       <target dev='hda' bus='virtio'/>
	I0708 19:55:33.239487   25689 main.go:141] libmachine: (ha-511021-m02)     </disk>
	I0708 19:55:33.239497   25689 main.go:141] libmachine: (ha-511021-m02)     <interface type='network'>
	I0708 19:55:33.239509   25689 main.go:141] libmachine: (ha-511021-m02)       <source network='mk-ha-511021'/>
	I0708 19:55:33.239517   25689 main.go:141] libmachine: (ha-511021-m02)       <model type='virtio'/>
	I0708 19:55:33.239523   25689 main.go:141] libmachine: (ha-511021-m02)     </interface>
	I0708 19:55:33.239531   25689 main.go:141] libmachine: (ha-511021-m02)     <interface type='network'>
	I0708 19:55:33.239539   25689 main.go:141] libmachine: (ha-511021-m02)       <source network='default'/>
	I0708 19:55:33.239544   25689 main.go:141] libmachine: (ha-511021-m02)       <model type='virtio'/>
	I0708 19:55:33.239551   25689 main.go:141] libmachine: (ha-511021-m02)     </interface>
	I0708 19:55:33.239556   25689 main.go:141] libmachine: (ha-511021-m02)     <serial type='pty'>
	I0708 19:55:33.239563   25689 main.go:141] libmachine: (ha-511021-m02)       <target port='0'/>
	I0708 19:55:33.239567   25689 main.go:141] libmachine: (ha-511021-m02)     </serial>
	I0708 19:55:33.239580   25689 main.go:141] libmachine: (ha-511021-m02)     <console type='pty'>
	I0708 19:55:33.239586   25689 main.go:141] libmachine: (ha-511021-m02)       <target type='serial' port='0'/>
	I0708 19:55:33.239611   25689 main.go:141] libmachine: (ha-511021-m02)     </console>
	I0708 19:55:33.239632   25689 main.go:141] libmachine: (ha-511021-m02)     <rng model='virtio'>
	I0708 19:55:33.239646   25689 main.go:141] libmachine: (ha-511021-m02)       <backend model='random'>/dev/random</backend>
	I0708 19:55:33.239656   25689 main.go:141] libmachine: (ha-511021-m02)     </rng>
	I0708 19:55:33.239666   25689 main.go:141] libmachine: (ha-511021-m02)     
	I0708 19:55:33.239679   25689 main.go:141] libmachine: (ha-511021-m02)     
	I0708 19:55:33.239693   25689 main.go:141] libmachine: (ha-511021-m02)   </devices>
	I0708 19:55:33.239706   25689 main.go:141] libmachine: (ha-511021-m02) </domain>
	I0708 19:55:33.239719   25689 main.go:141] libmachine: (ha-511021-m02) 
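
The block ending at 19:55:33.239 dumps the libvirt domain XML the kvm2 driver defines for ha-511021-m02: 2200 MiB of memory, 2 vCPUs, a boot2docker ISO plus a raw disk, and virtio NICs on both the default and the mk-ha-511021 networks. A minimal sketch of rendering a similar (abridged) domain definition with Go's text/template; the struct fields, disk path, and template are illustrative, not minikube's actual config or XML:

```go
package main

import (
	"os"
	"text/template"
)

// Abridged libvirt domain template modeled on the XML dumped in the log;
// the real definition also wires up the ISO, serial console and RNG device.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

// domainConfig holds the handful of values the template needs; the field
// names here are illustrative, not minikube's actual config struct.
type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	cfg := domainConfig{
		Name:      "ha-511021-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/ha-511021-m02.rawdisk", // placeholder path
		Network:   "mk-ha-511021",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```
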
	I0708 19:55:33.245823   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:bf:22:4c in network default
	I0708 19:55:33.246371   25689 main.go:141] libmachine: (ha-511021-m02) Ensuring networks are active...
	I0708 19:55:33.246404   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:33.247042   25689 main.go:141] libmachine: (ha-511021-m02) Ensuring network default is active
	I0708 19:55:33.247361   25689 main.go:141] libmachine: (ha-511021-m02) Ensuring network mk-ha-511021 is active
	I0708 19:55:33.247774   25689 main.go:141] libmachine: (ha-511021-m02) Getting domain xml...
	I0708 19:55:33.248434   25689 main.go:141] libmachine: (ha-511021-m02) Creating domain...
	I0708 19:55:34.477510   25689 main.go:141] libmachine: (ha-511021-m02) Waiting to get IP...
	I0708 19:55:34.478237   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:34.478640   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:34.478669   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:34.478627   26079 retry.go:31] will retry after 281.543718ms: waiting for machine to come up
	I0708 19:55:34.762270   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:34.762710   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:34.762738   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:34.762654   26079 retry.go:31] will retry after 382.724475ms: waiting for machine to come up
	I0708 19:55:35.147285   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:35.147774   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:35.147804   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:35.147726   26079 retry.go:31] will retry after 448.924672ms: waiting for machine to come up
	I0708 19:55:35.598552   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:35.598959   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:35.598987   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:35.598907   26079 retry.go:31] will retry after 526.749552ms: waiting for machine to come up
	I0708 19:55:36.127207   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:36.127692   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:36.127720   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:36.127664   26079 retry.go:31] will retry after 750.455986ms: waiting for machine to come up
	I0708 19:55:36.879870   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:36.880300   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:36.880341   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:36.880201   26079 retry.go:31] will retry after 665.309052ms: waiting for machine to come up
	I0708 19:55:37.547443   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:37.547843   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:37.547864   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:37.547830   26079 retry.go:31] will retry after 1.158507742s: waiting for machine to come up
	I0708 19:55:38.707853   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:38.708312   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:38.708337   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:38.708275   26079 retry.go:31] will retry after 1.226996776s: waiting for machine to come up
	I0708 19:55:39.937245   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:39.937745   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:39.937766   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:39.937687   26079 retry.go:31] will retry after 1.502146373s: waiting for machine to come up
	I0708 19:55:41.442564   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:41.443048   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:41.443077   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:41.442996   26079 retry.go:31] will retry after 2.11023787s: waiting for machine to come up
	I0708 19:55:43.555301   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:43.555850   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:43.555876   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:43.555807   26079 retry.go:31] will retry after 2.54569276s: waiting for machine to come up
	I0708 19:55:46.102861   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:46.103212   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:46.103238   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:46.103171   26079 retry.go:31] will retry after 3.061209639s: waiting for machine to come up
	I0708 19:55:49.166252   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:49.166583   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find current IP address of domain ha-511021-m02 in network mk-ha-511021
	I0708 19:55:49.166614   25689 main.go:141] libmachine: (ha-511021-m02) DBG | I0708 19:55:49.166576   26079 retry.go:31] will retry after 3.099576885s: waiting for machine to come up
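
The "will retry after ..." lines above (19:55:34 through 19:55:49) show the driver polling the domain for a DHCP lease with delays that grow from ~280ms to ~3s. A minimal sketch of such a poll loop with a growing, jittered backoff; the exact schedule and the lookup function are assumptions, not minikube's retry package:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) after each miss, much like the
// "will retry after ..." lines in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		time.Sleep(delay + jitter)
		if delay < 3*time.Second {
			delay += delay / 2 // grow ~1.5x per attempt
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	// Fake lookup that "finds" an address on the fifth attempt.
	lookup := func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.216", nil
	}
	ip, err := waitForIP(lookup, 30*time.Second)
	fmt.Println(ip, err)
}
```
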
	I0708 19:55:52.268760   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:52.269272   25689 main.go:141] libmachine: (ha-511021-m02) Found IP for machine: 192.168.39.216
	I0708 19:55:52.269291   25689 main.go:141] libmachine: (ha-511021-m02) Reserving static IP address...
	I0708 19:55:52.269306   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has current primary IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:52.269564   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find host DHCP lease matching {name: "ha-511021-m02", mac: "52:54:00:e2:dd:87", ip: "192.168.39.216"} in network mk-ha-511021
	I0708 19:55:52.342989   25689 main.go:141] libmachine: (ha-511021-m02) Reserved static IP address: 192.168.39.216
	I0708 19:55:52.343020   25689 main.go:141] libmachine: (ha-511021-m02) Waiting for SSH to be available...
	I0708 19:55:52.343030   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Getting to WaitForSSH function...
	I0708 19:55:52.345518   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:52.345938   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021
	I0708 19:55:52.345965   25689 main.go:141] libmachine: (ha-511021-m02) DBG | unable to find defined IP address of network mk-ha-511021 interface with MAC address 52:54:00:e2:dd:87
	I0708 19:55:52.346099   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Using SSH client type: external
	I0708 19:55:52.346121   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa (-rw-------)
	I0708 19:55:52.346154   25689 main.go:141] libmachine: (ha-511021-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 19:55:52.346176   25689 main.go:141] libmachine: (ha-511021-m02) DBG | About to run SSH command:
	I0708 19:55:52.346194   25689 main.go:141] libmachine: (ha-511021-m02) DBG | exit 0
	I0708 19:55:52.350040   25689 main.go:141] libmachine: (ha-511021-m02) DBG | SSH cmd err, output: exit status 255: 
	I0708 19:55:52.350066   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0708 19:55:52.350074   25689 main.go:141] libmachine: (ha-511021-m02) DBG | command : exit 0
	I0708 19:55:52.350079   25689 main.go:141] libmachine: (ha-511021-m02) DBG | err     : exit status 255
	I0708 19:55:52.350086   25689 main.go:141] libmachine: (ha-511021-m02) DBG | output  : 
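
WaitForSSH probes the machine by shelling out to the external ssh client with host-key checking disabled and running `exit 0`; the first attempt at 19:55:52 fails with status 255 (the lease/sshd is not ready yet) and the probe is repeated until it succeeds at 19:55:55. A minimal sketch of the same probe via os/exec, using a subset of the options shown in the log (the key path and address are the machine-specific values from this run):

```go
package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `ssh ... user@addr exit 0` with host-key checking disabled,
// the same kind of probe the log shows; a nil error means sshd answered and
// the remote command exited 0.
func sshReady(user, addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, addr),
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	// Values taken from the log above; the key path is machine-specific.
	err := sshReady("docker", "192.168.39.216",
		"/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa")
	fmt.Println("ssh reachable:", err == nil)
}
```
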
	I0708 19:55:55.351658   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Getting to WaitForSSH function...
	I0708 19:55:55.354551   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.355109   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.355138   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.355291   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Using SSH client type: external
	I0708 19:55:55.355315   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa (-rw-------)
	I0708 19:55:55.355345   25689 main.go:141] libmachine: (ha-511021-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 19:55:55.355399   25689 main.go:141] libmachine: (ha-511021-m02) DBG | About to run SSH command:
	I0708 19:55:55.355444   25689 main.go:141] libmachine: (ha-511021-m02) DBG | exit 0
	I0708 19:55:55.483834   25689 main.go:141] libmachine: (ha-511021-m02) DBG | SSH cmd err, output: <nil>: 
	I0708 19:55:55.484110   25689 main.go:141] libmachine: (ha-511021-m02) KVM machine creation complete!
	I0708 19:55:55.484422   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetConfigRaw
	I0708 19:55:55.484928   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:55.485123   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:55.485307   25689 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0708 19:55:55.485321   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetState
	I0708 19:55:55.486524   25689 main.go:141] libmachine: Detecting operating system of created instance...
	I0708 19:55:55.486535   25689 main.go:141] libmachine: Waiting for SSH to be available...
	I0708 19:55:55.486550   25689 main.go:141] libmachine: Getting to WaitForSSH function...
	I0708 19:55:55.486555   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:55.488949   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.489308   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.489328   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.489479   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:55.489703   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.489856   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.490033   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:55.490204   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:55.490437   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0708 19:55:55.490453   25689 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0708 19:55:55.602951   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 19:55:55.603071   25689 main.go:141] libmachine: Detecting the provisioner...
	I0708 19:55:55.603084   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:55.606101   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.606461   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.606490   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.606683   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:55.606878   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.607053   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.607176   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:55.607333   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:55.607533   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0708 19:55:55.607544   25689 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0708 19:55:55.716596   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0708 19:55:55.716657   25689 main.go:141] libmachine: found compatible host: buildroot
	I0708 19:55:55.716663   25689 main.go:141] libmachine: Provisioning with buildroot...
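
Provisioner detection above works by reading /etc/os-release over SSH and matching the distribution ID (buildroot here). A minimal sketch of pulling the ID field out of that output, assuming the captured content shown in the log:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectDistro pulls the ID= field out of /etc/os-release content, which is
// how a provisioner can be matched against the guest OS (buildroot here).
func detectDistro(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	// Output captured in the log above.
	osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println(detectDistro(osRelease)) // prints: buildroot
}
```
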
	I0708 19:55:55.716670   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetMachineName
	I0708 19:55:55.716915   25689 buildroot.go:166] provisioning hostname "ha-511021-m02"
	I0708 19:55:55.716939   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetMachineName
	I0708 19:55:55.717138   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:55.720201   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.720658   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.720686   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.720844   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:55.721029   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.721216   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.721362   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:55.721511   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:55.721666   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0708 19:55:55.721679   25689 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-511021-m02 && echo "ha-511021-m02" | sudo tee /etc/hostname
	I0708 19:55:55.844716   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-511021-m02
	
	I0708 19:55:55.844746   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:55.847576   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.847887   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.847914   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.848059   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:55.848261   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.848455   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:55.848604   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:55.848797   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:55.848990   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0708 19:55:55.849007   25689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-511021-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-511021-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-511021-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 19:55:55.969354   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 19:55:55.969382   25689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 19:55:55.969402   25689 buildroot.go:174] setting up certificates
	I0708 19:55:55.969413   25689 provision.go:84] configureAuth start
	I0708 19:55:55.969425   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetMachineName
	I0708 19:55:55.969705   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 19:55:55.972586   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.972945   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.972971   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.973133   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:55.975163   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.975556   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:55.975583   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:55.975725   25689 provision.go:143] copyHostCerts
	I0708 19:55:55.975757   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 19:55:55.975790   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 19:55:55.975799   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 19:55:55.975875   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 19:55:55.975962   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 19:55:55.975991   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 19:55:55.976002   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 19:55:55.976046   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 19:55:55.976121   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 19:55:55.976140   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 19:55:55.976148   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 19:55:55.976179   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 19:55:55.976237   25689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.ha-511021-m02 san=[127.0.0.1 192.168.39.216 ha-511021-m02 localhost minikube]
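
The line above generates a server certificate signed by the minikube CA with org=jenkins.ha-511021-m02 and SANs [127.0.0.1 192.168.39.216 ha-511021-m02 localhost minikube]. A minimal sketch of issuing a certificate with those SANs using Go's crypto/x509; a throwaway self-signed CA stands in for ca.pem/ca-key.pem, and error handling is elided to keep the sketch short (this is not minikube's code):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA standing in for the ca.pem/ca-key.pem from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs listed in the log for ha-511021-m02.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-511021-m02", Organization: []string{"jenkins.ha-511021-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-511021-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.216")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write server.pem; errors are ignored here to keep the sketch short.
	f, _ := os.Create("server.pem")
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```
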
	I0708 19:55:56.146290   25689 provision.go:177] copyRemoteCerts
	I0708 19:55:56.146342   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 19:55:56.146364   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:56.148906   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.149248   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.149275   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.149468   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:56.149676   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.149828   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:56.149959   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	I0708 19:55:56.234581   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 19:55:56.234654   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 19:55:56.261328   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 19:55:56.261397   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0708 19:55:56.286817   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 19:55:56.286879   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 19:55:56.312274   25689 provision.go:87] duration metric: took 342.848931ms to configureAuth
	I0708 19:55:56.312317   25689 buildroot.go:189] setting minikube options for container-runtime
	I0708 19:55:56.312508   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:55:56.312590   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:56.315095   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.315418   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.315466   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.315698   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:56.315888   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.316056   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.316202   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:56.316345   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:56.316512   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0708 19:55:56.316533   25689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 19:55:56.591734   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 19:55:56.591765   25689 main.go:141] libmachine: Checking connection to Docker...
	I0708 19:55:56.591775   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetURL
	I0708 19:55:56.592816   25689 main.go:141] libmachine: (ha-511021-m02) DBG | Using libvirt version 6000000
	I0708 19:55:56.595154   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.595493   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.595520   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.595692   25689 main.go:141] libmachine: Docker is up and running!
	I0708 19:55:56.595705   25689 main.go:141] libmachine: Reticulating splines...
	I0708 19:55:56.595712   25689 client.go:171] duration metric: took 23.867837165s to LocalClient.Create
	I0708 19:55:56.595731   25689 start.go:167] duration metric: took 23.867892319s to libmachine.API.Create "ha-511021"
	I0708 19:55:56.595739   25689 start.go:293] postStartSetup for "ha-511021-m02" (driver="kvm2")
	I0708 19:55:56.595748   25689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 19:55:56.595763   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:56.595978   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 19:55:56.595999   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:56.598010   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.598339   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.598354   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.598468   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:56.598632   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.598764   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:56.598920   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	I0708 19:55:56.686415   25689 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 19:55:56.690968   25689 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 19:55:56.690991   25689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 19:55:56.691053   25689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 19:55:56.691119   25689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 19:55:56.691128   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /etc/ssl/certs/131412.pem
	I0708 19:55:56.691207   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 19:55:56.701758   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 19:55:56.727119   25689 start.go:296] duration metric: took 131.369772ms for postStartSetup
	I0708 19:55:56.727159   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetConfigRaw
	I0708 19:55:56.727721   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 19:55:56.730082   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.730451   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.730476   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.730688   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:55:56.730871   25689 start.go:128] duration metric: took 24.022115297s to createHost
	I0708 19:55:56.730894   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:56.733156   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.733472   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.733496   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.733647   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:56.733812   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.733975   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.734096   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:56.734248   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:55:56.734452   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0708 19:55:56.734468   25689 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 19:55:56.844723   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720468556.821051446
	
	I0708 19:55:56.844747   25689 fix.go:216] guest clock: 1720468556.821051446
	I0708 19:55:56.844757   25689 fix.go:229] Guest: 2024-07-08 19:55:56.821051446 +0000 UTC Remote: 2024-07-08 19:55:56.730882592 +0000 UTC m=+77.116577746 (delta=90.168854ms)
	I0708 19:55:56.844777   25689 fix.go:200] guest clock delta is within tolerance: 90.168854ms
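
The clock check above runs `date +%s.%N` on the guest (the `%!s(MISSING)` mangling a few lines earlier is just the log formatter dropping its arguments), parses the result, and compares it to the local time, resyncing only if the delta exceeds a tolerance. A minimal Go sketch of that comparison, assuming the guest output string is already in hand; the 1-second tolerance is illustrative and not minikube's configured value.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	f, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	// Value copied from the log above; in practice it comes back over SSH.
	guest, err := parseGuestClock("1720468556.821051446")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 1 * time.Second // illustrative threshold, not minikube's real one
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v, would resync\n", delta, tolerance)
	}
}
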
	I0708 19:55:56.844784   25689 start.go:83] releasing machines lock for "ha-511021-m02", held for 24.136104006s
	I0708 19:55:56.844807   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:56.845081   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 19:55:56.847788   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.848120   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.848140   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.850423   25689 out.go:177] * Found network options:
	I0708 19:55:56.851805   25689 out.go:177]   - NO_PROXY=192.168.39.33
	W0708 19:55:56.853006   25689 proxy.go:119] fail to check proxy env: Error ip not in block
	I0708 19:55:56.853031   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:56.853591   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:56.853768   25689 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 19:55:56.853858   25689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 19:55:56.853897   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	W0708 19:55:56.853961   25689 proxy.go:119] fail to check proxy env: Error ip not in block
	I0708 19:55:56.854032   25689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 19:55:56.854054   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 19:55:56.856550   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.856730   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.856903   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.856930   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.857098   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:56.857104   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:56.857126   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:56.857311   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 19:55:56.857331   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.857504   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:56.857511   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 19:55:56.857661   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 19:55:56.857660   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	I0708 19:55:56.857810   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	I0708 19:55:57.093354   25689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 19:55:57.100070   25689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 19:55:57.100166   25689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 19:55:57.116867   25689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 19:55:57.116898   25689 start.go:494] detecting cgroup driver to use...
	I0708 19:55:57.116969   25689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 19:55:57.135272   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 19:55:57.152746   25689 docker.go:217] disabling cri-docker service (if available) ...
	I0708 19:55:57.152806   25689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 19:55:57.169544   25689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 19:55:57.184676   25689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 19:55:57.306676   25689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 19:55:57.455741   25689 docker.go:233] disabling docker service ...
	I0708 19:55:57.455814   25689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 19:55:57.471241   25689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 19:55:57.484940   25689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 19:55:57.625933   25689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 19:55:57.749504   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 19:55:57.763929   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 19:55:57.783042   25689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 19:55:57.783100   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:57.793433   25689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 19:55:57.793498   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:57.803935   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:57.814024   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:57.824385   25689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 19:55:57.835327   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:57.846638   25689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:55:57.864310   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
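
The ssh_runner calls above configure CRI-O entirely through sed -i edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, sysctls). A rough sketch of assembling such command strings before handing them to an SSH runner; it only prints them, so it is safe to run anywhere, and the file path and values are copied from the log rather than taken from minikube's source.

package main

import "fmt"

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	// Each entry mirrors one of the substitutions in the log above.
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, "registry.k8s.io/pause:3.9", conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, "cgroupfs", conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}

	// In minikube these strings are executed on the guest via an SSH runner;
	// here they are only printed.
	for _, c := range cmds {
		fmt.Println(c)
	}
}
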
	I0708 19:55:57.875470   25689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 19:55:57.885159   25689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 19:55:57.885230   25689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 19:55:57.899496   25689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
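
The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, after which IPv4 forwarding is switched on. A small read-only sketch of the same probe; it reports the current state instead of changing it, so it needs no root and makes no modprobe call.

package main

import (
	"fmt"
	"os"
	"strings"
)

// readProcFlag returns the trimmed contents of a /proc/sys entry.
func readProcFlag(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	for _, p := range []string{
		"/proc/sys/net/bridge/bridge-nf-call-iptables", // missing until br_netfilter is loaded
		"/proc/sys/net/ipv4/ip_forward",
	} {
		v, err := readProcFlag(p)
		if err != nil {
			fmt.Printf("%s: not available (%v)\n", p, err)
			continue
		}
		fmt.Printf("%s = %s\n", p, v)
	}
}
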
	I0708 19:55:57.909743   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:55:58.039190   25689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 19:55:58.180523   25689 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 19:55:58.180599   25689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 19:55:58.185712   25689 start.go:562] Will wait 60s for crictl version
	I0708 19:55:58.185775   25689 ssh_runner.go:195] Run: which crictl
	I0708 19:55:58.189767   25689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 19:55:58.230255   25689 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 19:55:58.230350   25689 ssh_runner.go:195] Run: crio --version
	I0708 19:55:58.259882   25689 ssh_runner.go:195] Run: crio --version
	I0708 19:55:58.291237   25689 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 19:55:58.293111   25689 out.go:177]   - env NO_PROXY=192.168.39.33
	I0708 19:55:58.294387   25689 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 19:55:58.297301   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:58.297612   25689 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:55:46 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 19:55:58.297640   25689 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 19:55:58.297811   25689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 19:55:58.301994   25689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:55:58.314364   25689 mustload.go:65] Loading cluster: ha-511021
	I0708 19:55:58.314543   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:55:58.314774   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:58.314799   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:58.329140   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39711
	I0708 19:55:58.329526   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:58.329966   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:58.329982   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:58.330513   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:58.330705   25689 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 19:55:58.332263   25689 host.go:66] Checking if "ha-511021" exists ...
	I0708 19:55:58.332541   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:55:58.332570   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:55:58.348547   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39075
	I0708 19:55:58.348930   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:55:58.349354   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:55:58.349373   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:55:58.349658   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:55:58.349842   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:55:58.349980   25689 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021 for IP: 192.168.39.216
	I0708 19:55:58.349992   25689 certs.go:194] generating shared ca certs ...
	I0708 19:55:58.350010   25689 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:58.350149   25689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 19:55:58.350205   25689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 19:55:58.350219   25689 certs.go:256] generating profile certs ...
	I0708 19:55:58.350404   25689 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key
	I0708 19:55:58.350442   25689 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.9d499452
	I0708 19:55:58.350462   25689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.9d499452 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.33 192.168.39.216 192.168.39.254]
	I0708 19:55:58.488883   25689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.9d499452 ...
	I0708 19:55:58.488912   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.9d499452: {Name:mke2c1acf56b5fe06b7700caff32ef7d088bced9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:58.489077   25689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.9d499452 ...
	I0708 19:55:58.489092   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.9d499452: {Name:mk25c9e786a144c25fe333b8e79bf36398614c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:55:58.489158   25689 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.9d499452 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt
	I0708 19:55:58.489281   25689 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.9d499452 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key
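
The apiserver certificate generated above carries IP SANs for the cluster service IPs, localhost, both control-plane nodes and the HA VIP 192.168.39.254. A self-contained sketch of issuing a certificate with those IP SANs via crypto/x509; it is self-signed for brevity, whereas minikube signs against its minikubeCA key, and the ECDSA key type is this sketch's choice, not minikube's.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs mirroring the list in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.33"),
			net.ParseIP("192.168.39.216"),
			net.ParseIP("192.168.39.254"),
		},
	}

	// Self-signed for the sketch: the template doubles as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
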
	I0708 19:55:58.489398   25689 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key
	I0708 19:55:58.489412   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 19:55:58.489424   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 19:55:58.489434   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 19:55:58.489444   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 19:55:58.489456   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 19:55:58.489466   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 19:55:58.489477   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 19:55:58.489486   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 19:55:58.489557   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 19:55:58.489589   25689 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 19:55:58.489598   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 19:55:58.489618   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 19:55:58.489639   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 19:55:58.489661   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 19:55:58.489702   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 19:55:58.489729   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem -> /usr/share/ca-certificates/13141.pem
	I0708 19:55:58.489742   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /usr/share/ca-certificates/131412.pem
	I0708 19:55:58.489754   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:58.489782   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:55:58.492795   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:58.493194   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:55:58.493224   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:55:58.493404   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:55:58.493597   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:55:58.493727   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:55:58.493879   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:55:58.575804   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0708 19:55:58.581606   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0708 19:55:58.593359   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0708 19:55:58.597934   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0708 19:55:58.608339   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0708 19:55:58.612329   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0708 19:55:58.622599   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0708 19:55:58.627590   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0708 19:55:58.639969   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0708 19:55:58.645042   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0708 19:55:58.658608   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0708 19:55:58.663411   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0708 19:55:58.675636   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 19:55:58.703616   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 19:55:58.731091   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 19:55:58.758232   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 19:55:58.786887   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0708 19:55:58.813878   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 19:55:58.841425   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 19:55:58.866914   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 19:55:58.892897   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 19:55:58.919388   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 19:55:58.946031   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 19:55:58.972792   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0708 19:55:58.992597   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0708 19:55:59.012052   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0708 19:55:59.030024   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0708 19:55:59.047587   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0708 19:55:59.065891   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0708 19:55:59.084561   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0708 19:55:59.102698   25689 ssh_runner.go:195] Run: openssl version
	I0708 19:55:59.108854   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 19:55:59.120506   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 19:55:59.125400   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 19:55:59.125468   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 19:55:59.132125   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 19:55:59.143357   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 19:55:59.154995   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:59.159827   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:59.159893   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:55:59.166112   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 19:55:59.177755   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 19:55:59.188869   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 19:55:59.193502   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 19:55:59.193560   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 19:55:59.199432   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 19:55:59.210498   25689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 19:55:59.214711   25689 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 19:55:59.214764   25689 kubeadm.go:928] updating node {m02 192.168.39.216 8443 v1.30.2 crio true true} ...
	I0708 19:55:59.214833   25689 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-511021-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 19:55:59.214857   25689 kube-vip.go:115] generating kube-vip config ...
	I0708 19:55:59.214891   25689 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0708 19:55:59.233565   25689 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0708 19:55:59.233649   25689 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
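
The kube-vip static-pod manifest above is generated per control-plane node with the VIP, port and image filled in before being written to /etc/kubernetes/manifests. A hedged sketch of rendering such a manifest with text/template; the template fragment is heavily trimmed and is not minikube's actual kube-vip template.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the kube-vip manifest, not minikube's real template.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values copied from the log above.
	err := t.Execute(os.Stdout, struct {
		Image string
		Port  int
		VIP   string
	}{
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.0",
		Port:  8443,
		VIP:   "192.168.39.254",
	})
	if err != nil {
		panic(err)
	}
}
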
	I0708 19:55:59.233712   25689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 19:55:59.244383   25689 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0708 19:55:59.244443   25689 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0708 19:55:59.256531   25689 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0708 19:55:59.256559   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0708 19:55:59.256616   25689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0708 19:55:59.256653   25689 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0708 19:55:59.256682   25689 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0708 19:55:59.261172   25689 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0708 19:55:59.261195   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0708 19:55:59.796847   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0708 19:55:59.796925   25689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0708 19:55:59.802130   25689 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0708 19:55:59.802174   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0708 19:56:00.118058   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 19:56:00.133338   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0708 19:56:00.133447   25689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0708 19:56:00.137922   25689 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0708 19:56:00.137960   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
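
Each of the three binaries above (kubectl, kubeadm, kubelet) follows the same pattern: stat the target under /var/lib/minikube/binaries, and only transfer the cached copy when the stat fails. A local-filesystem sketch of that check-then-copy step; the paths are placeholders and the copy is a plain io.Copy rather than scp over SSH.

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src to dst only when dst does not already exist,
// mirroring the stat-then-scp flow in the log above.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("%s already present, skipping\n", dst)
		return nil
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Placeholder paths for illustration only.
	if err := ensureBinary("cache/v1.30.2/kubelet", "binaries/v1.30.2/kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, "copy failed:", err)
	}
}
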
	I0708 19:56:00.572541   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0708 19:56:00.582919   25689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0708 19:56:00.601072   25689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 19:56:00.619999   25689 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0708 19:56:00.638081   25689 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0708 19:56:00.642218   25689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:56:00.657388   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:56:00.780308   25689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:56:00.798165   25689 host.go:66] Checking if "ha-511021" exists ...
	I0708 19:56:00.798622   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:56:00.798672   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:56:00.813316   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I0708 19:56:00.813753   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:56:00.814217   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:56:00.814235   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:56:00.814594   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:56:00.814804   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:56:00.814972   25689 start.go:316] joinCluster: &{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:56:00.815056   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0708 19:56:00.815077   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:56:00.817849   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:56:00.818257   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:56:00.818286   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:56:00.818501   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:56:00.818674   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:56:00.818849   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:56:00.819044   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:56:00.984474   25689 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:56:00.984518   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7yvtjh.6r8fpit8xu0pxizs --discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-511021-m02 --control-plane --apiserver-advertise-address=192.168.39.216 --apiserver-bind-port=8443"
	I0708 19:56:24.646578   25689 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7yvtjh.6r8fpit8xu0pxizs --discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-511021-m02 --control-plane --apiserver-advertise-address=192.168.39.216 --apiserver-bind-port=8443": (23.662035066s)
	I0708 19:56:24.646617   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0708 19:56:25.243165   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-511021-m02 minikube.k8s.io/updated_at=2024_07_08T19_56_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=ha-511021 minikube.k8s.io/primary=false
	I0708 19:56:25.378158   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-511021-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0708 19:56:25.496399   25689 start.go:318] duration metric: took 24.6814294s to joinCluster
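
The join above is one kubeadm invocation assembled from the token and CA-cert hash printed by `kubeadm token create --print-join-command` on the primary node, plus the new node's name, CRI socket and advertise address. A sketch of assembling that command line as a string; the token and hash are copied from the log and would be freshly minted at runtime.

package main

import (
	"fmt"
	"strings"
)

// joinCommand builds a control-plane join invocation from its parts.
func joinCommand(endpoint, token, caHash, nodeName, advertiseIP string, port int) string {
	parts := []string{
		"kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name", nodeName,
		"--control-plane",
		"--apiserver-advertise-address", advertiseIP,
		"--apiserver-bind-port", fmt.Sprint(port),
	}
	return strings.Join(parts, " ")
}

func main() {
	// Values copied from the log above.
	fmt.Println(joinCommand(
		"control-plane.minikube.internal:8443",
		"7yvtjh.6r8fpit8xu0pxizs",
		"sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0",
		"ha-511021-m02",
		"192.168.39.216",
		8443,
	))
}
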
	I0708 19:56:25.496469   25689 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:56:25.496727   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:56:25.498340   25689 out.go:177] * Verifying Kubernetes components...
	I0708 19:56:25.499715   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:56:25.826747   25689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:56:25.905596   25689 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:56:25.905928   25689 kapi.go:59] client config for ha-511021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key", CAFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfdf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0708 19:56:25.906011   25689 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.33:8443
	I0708 19:56:25.906292   25689 node_ready.go:35] waiting up to 6m0s for node "ha-511021-m02" to be "Ready" ...
	I0708 19:56:25.906381   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:25.906391   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:25.906402   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:25.906410   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:25.920724   25689 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0708 19:56:26.407011   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:26.407037   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:26.407048   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:26.407055   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:26.437160   25689 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0708 19:56:26.907246   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:26.907268   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:26.907278   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:26.907283   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:26.912369   25689 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0708 19:56:27.407268   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:27.407289   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:27.407300   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:27.407308   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:27.410994   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:27.907201   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:27.907221   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:27.907229   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:27.907233   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:27.911148   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:27.911916   25689 node_ready.go:53] node "ha-511021-m02" has status "Ready":"False"
	I0708 19:56:28.406890   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:28.406908   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.406916   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.406919   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.412364   25689 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0708 19:56:28.907391   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:28.907408   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.907416   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.907420   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.912117   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:28.912669   25689 node_ready.go:49] node "ha-511021-m02" has status "Ready":"True"
	I0708 19:56:28.912685   25689 node_ready.go:38] duration metric: took 3.006371704s for node "ha-511021-m02" to be "Ready" ...
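
The node_ready wait above polls GET /api/v1/nodes/ha-511021-m02 until the Ready condition reports True. An equivalent sketch using client-go's typed API and wait.PollUntilContextTimeout; the kubeconfig path and polling interval are illustrative, and minikube drives these requests through its own round-tripper wrappers rather than this exact code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the test uses its own profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const node = "ha-511021-m02"
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient API errors
			}
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %q is Ready\n", node)
}
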
	I0708 19:56:28.912692   25689 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 19:56:28.912758   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:56:28.912769   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.912778   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.912783   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.917610   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:28.925229   25689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4lzjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.925311   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4lzjf
	I0708 19:56:28.925319   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.925326   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.925332   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.932858   25689 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0708 19:56:28.933481   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:28.933496   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.933503   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.933507   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.938196   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:28.938640   25689 pod_ready.go:92] pod "coredns-7db6d8ff4d-4lzjf" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:28.938655   25689 pod_ready.go:81] duration metric: took 13.40159ms for pod "coredns-7db6d8ff4d-4lzjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.938664   25689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w6m9c" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.938717   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w6m9c
	I0708 19:56:28.938724   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.938731   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.938734   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.944566   25689 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0708 19:56:28.945307   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:28.945327   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.945337   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.945342   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.949220   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:28.949786   25689 pod_ready.go:92] pod "coredns-7db6d8ff4d-w6m9c" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:28.949806   25689 pod_ready.go:81] duration metric: took 11.135851ms for pod "coredns-7db6d8ff4d-w6m9c" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.949816   25689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.949867   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021
	I0708 19:56:28.949874   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.949883   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.949889   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.953241   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:28.953724   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:28.953739   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.953749   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.953753   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.956410   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:28.956940   25689 pod_ready.go:92] pod "etcd-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:28.956956   25689 pod_ready.go:81] duration metric: took 7.134034ms for pod "etcd-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.956970   25689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:28.957021   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:28.957029   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.957035   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.957038   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.959753   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:28.960270   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:28.960282   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:28.960289   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:28.960294   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:28.963659   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:29.457237   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:29.457258   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:29.457266   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:29.457271   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:29.460685   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:29.461279   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:29.461296   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:29.461304   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:29.461309   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:29.464229   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:29.958030   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:29.958054   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:29.958062   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:29.958066   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:29.962162   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:29.963050   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:29.963068   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:29.963079   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:29.963086   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:29.971876   25689 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0708 19:56:30.457159   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:30.457180   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:30.457186   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:30.457191   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:30.461184   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:30.461939   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:30.461953   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:30.461960   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:30.461965   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:30.464513   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:30.957877   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:30.957898   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:30.957908   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:30.957912   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:30.960807   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:30.961423   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:30.961439   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:30.961448   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:30.961454   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:30.963755   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:30.964245   25689 pod_ready.go:102] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"False"
	I0708 19:56:31.457384   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:31.457402   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:31.457410   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:31.457414   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:31.461633   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:31.462860   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:31.462875   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:31.462882   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:31.462887   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:31.466276   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:31.957965   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:31.957988   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:31.957999   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:31.958004   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:31.961523   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:31.962208   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:31.962230   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:31.962237   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:31.962241   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:31.964883   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:32.457861   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:32.457880   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:32.457888   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:32.457893   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:32.461623   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:32.462831   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:32.462846   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:32.462853   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:32.462856   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:32.465541   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:32.957608   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:32.957629   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:32.957637   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:32.957643   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:32.960569   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:32.961442   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:32.961462   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:32.961469   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:32.961473   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:32.964325   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:32.965059   25689 pod_ready.go:102] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"False"
	I0708 19:56:33.457240   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:33.457262   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:33.457270   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:33.457274   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:33.460238   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:33.460958   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:33.460974   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:33.460981   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:33.460984   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:33.463583   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:33.957976   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:33.958004   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:33.958017   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:33.958024   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:33.961284   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:33.962132   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:33.962148   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:33.962155   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:33.962159   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:33.971565   25689 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0708 19:56:34.457943   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:34.457963   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:34.457971   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:34.457974   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:34.460423   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:34.461215   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:34.461228   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:34.461235   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:34.461241   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:34.463519   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:34.958234   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:34.958260   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:34.958270   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:34.958275   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:34.961837   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:34.962627   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:34.962639   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:34.962646   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:34.962649   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:34.965512   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:34.966027   25689 pod_ready.go:102] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"False"
	I0708 19:56:35.457449   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:35.457470   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:35.457478   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:35.457482   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:35.462187   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:35.462747   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:35.462766   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:35.462774   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:35.462778   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:35.465123   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:35.957196   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:35.957219   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:35.957227   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:35.957231   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:35.960989   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:35.961703   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:35.961717   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:35.961724   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:35.961727   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:35.964654   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:36.457563   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:36.457585   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:36.457593   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:36.457598   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:36.460947   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:36.461856   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:36.461872   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:36.461883   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:36.461888   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:36.465329   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:36.957515   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:36.957538   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:36.957547   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:36.957552   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:36.960499   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:36.961013   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:36.961026   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:36.961036   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:36.961043   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:36.963318   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:37.457943   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:37.457963   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:37.457971   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:37.457974   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:37.461341   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:37.461969   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:37.461982   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:37.461990   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:37.461994   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:37.464663   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:37.465295   25689 pod_ready.go:102] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"False"
	I0708 19:56:37.958043   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:37.958065   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:37.958073   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:37.958077   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:37.962163   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:37.962751   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:37.962763   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:37.962771   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:37.962776   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:37.965420   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:38.457426   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:38.457448   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:38.457456   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:38.457461   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:38.461786   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:38.462829   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:38.462849   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:38.462861   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:38.462869   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:38.465874   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:38.957927   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:38.957952   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:38.957963   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:38.957969   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:38.961903   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:38.962559   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:38.962574   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:38.962581   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:38.962585   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:38.965818   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:39.457844   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:39.457866   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:39.457873   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:39.457877   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:39.461052   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:39.461888   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:39.461907   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:39.461917   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:39.461923   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:39.464623   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:39.957240   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:39.957261   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:39.957269   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:39.957275   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:39.961320   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:39.962029   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:39.962046   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:39.962063   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:39.962069   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:39.964966   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:39.965582   25689 pod_ready.go:102] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"False"
	I0708 19:56:40.458018   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:40.458039   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:40.458050   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:40.458056   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:40.461399   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:40.462016   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:40.462033   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:40.462043   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:40.462048   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:40.464946   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:40.957232   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:40.957249   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:40.957257   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:40.957261   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:40.961326   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:40.962034   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:40.962047   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:40.962054   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:40.962059   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:40.965054   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:41.457910   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:41.457930   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:41.457937   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:41.457942   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:41.461521   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:41.462272   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:41.462285   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:41.462292   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:41.462298   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:41.464954   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:41.957133   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:41.957154   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:41.957161   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:41.957167   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:41.960945   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:41.961842   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:41.961857   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:41.961865   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:41.961868   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:41.964417   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.457263   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:56:42.457286   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.457294   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.457298   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.460751   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:42.461505   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:42.461518   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.461525   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.461530   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.464456   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.465281   25689 pod_ready.go:92] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:42.465297   25689 pod_ready.go:81] duration metric: took 13.508321083s for pod "etcd-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.465311   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.465355   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021
	I0708 19:56:42.465362   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.465369   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.465373   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.468275   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.468875   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:42.468891   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.468898   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.468900   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.471097   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.471559   25689 pod_ready.go:92] pod "kube-apiserver-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:42.471575   25689 pod_ready.go:81] duration metric: took 6.259ms for pod "kube-apiserver-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.471583   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.471628   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m02
	I0708 19:56:42.471636   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.471642   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.471645   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.474045   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.475028   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:42.475050   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.475057   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.475063   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.477184   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.477671   25689 pod_ready.go:92] pod "kube-apiserver-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:42.477687   25689 pod_ready.go:81] duration metric: took 6.098489ms for pod "kube-apiserver-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.477695   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.477758   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021
	I0708 19:56:42.477766   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.477773   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.477777   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.479977   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.480456   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:42.480468   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.480475   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.480478   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.482861   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.483338   25689 pod_ready.go:92] pod "kube-controller-manager-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:42.483355   25689 pod_ready.go:81] duration metric: took 5.653907ms for pod "kube-controller-manager-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.483364   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.483425   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m02
	I0708 19:56:42.483435   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.483465   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.483477   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.486028   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.486998   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:42.487014   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.487021   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.487027   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.489165   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:42.489813   25689 pod_ready.go:92] pod "kube-controller-manager-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:42.489841   25689 pod_ready.go:81] duration metric: took 6.459082ms for pod "kube-controller-manager-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.489854   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-976tb" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:42.658256   25689 request.go:629] Waited for 168.328911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-976tb
	I0708 19:56:42.658308   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-976tb
	I0708 19:56:42.658313   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.658320   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.658324   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.661739   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:42.857764   25689 request.go:629] Waited for 195.466462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:42.857835   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:42.857841   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:42.857850   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:42.857860   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:42.861038   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:42.861784   25689 pod_ready.go:92] pod "kube-proxy-976tb" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:42.861805   25689 pod_ready.go:81] duration metric: took 371.940121ms for pod "kube-proxy-976tb" in "kube-system" namespace to be "Ready" ...
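
The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter, which paces requests once they exceed the configured QPS/burst (defaults of 5 QPS and a burst of 10), independently of the server's API Priority and Fairness. A rough sketch of where those knobs live (not minikube's code; the values shown are arbitrary):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig the same way kubectl would.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// client-go throttles on the client side once requests outpace Burst,
	// then admits them at QPS per second; the defaults (5 QPS, burst 10)
	// are what produce the "client-side throttling" waits during rapid polling.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", cs != nil)
}

With the defaults left in place, tight polling loops like the one above periodically trip the limiter, which is what the ~170-200ms waits in the log reflect.
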
	I0708 19:56:42.861819   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tmkjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:43.057930   25689 request.go:629] Waited for 196.046623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmkjf
	I0708 19:56:43.058009   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmkjf
	I0708 19:56:43.058022   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:43.058032   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:43.058042   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:43.062026   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:43.257353   25689 request.go:629] Waited for 194.208854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:43.257424   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:43.257432   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:43.257442   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:43.257446   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:43.260627   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:43.261256   25689 pod_ready.go:92] pod "kube-proxy-tmkjf" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:43.261275   25689 pod_ready.go:81] duration metric: took 399.449111ms for pod "kube-proxy-tmkjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:43.261287   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:43.458216   25689 request.go:629] Waited for 196.846469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021
	I0708 19:56:43.458297   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021
	I0708 19:56:43.458304   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:43.458318   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:43.458330   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:43.461848   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:43.657812   25689 request.go:629] Waited for 195.372871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:43.657888   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:56:43.657897   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:43.657905   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:43.657911   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:43.661064   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:43.661857   25689 pod_ready.go:92] pod "kube-scheduler-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:43.661879   25689 pod_ready.go:81] duration metric: took 400.583933ms for pod "kube-scheduler-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:43.661892   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:43.857959   25689 request.go:629] Waited for 196.003992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021-m02
	I0708 19:56:43.858020   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021-m02
	I0708 19:56:43.858025   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:43.858032   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:43.858040   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:43.861046   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:56:44.057987   25689 request.go:629] Waited for 196.36324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:44.058072   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:56:44.058081   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:44.058092   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:44.058097   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:44.061538   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:44.062231   25689 pod_ready.go:92] pod "kube-scheduler-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:56:44.062251   25689 pod_ready.go:81] duration metric: took 400.352378ms for pod "kube-scheduler-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:56:44.062265   25689 pod_ready.go:38] duration metric: took 15.149561086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
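
The pod_ready phase above boils down to re-fetching each pod roughly every half second and checking its Ready condition until it reports True or the 6m0s budget runs out. A minimal sketch of that polling shape with client-go (illustrative only; the package and function names here are placeholders, not minikube's):

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls a pod until its PodReady condition is True or the
// timeout expires, mirroring the ~500ms GET cadence visible in the log.
func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

Returning nil for transient errors keeps the loop alive across brief apiserver hiccups, which matters in HA tests like this one where a second control plane is still settling.
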
	I0708 19:56:44.062283   25689 api_server.go:52] waiting for apiserver process to appear ...
	I0708 19:56:44.062342   25689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 19:56:44.079005   25689 api_server.go:72] duration metric: took 18.582500945s to wait for apiserver process to appear ...
	I0708 19:56:44.079035   25689 api_server.go:88] waiting for apiserver healthz status ...
	I0708 19:56:44.079055   25689 api_server.go:253] Checking apiserver healthz at https://192.168.39.33:8443/healthz ...
	I0708 19:56:44.085015   25689 api_server.go:279] https://192.168.39.33:8443/healthz returned 200:
	ok
	I0708 19:56:44.085079   25689 round_trippers.go:463] GET https://192.168.39.33:8443/version
	I0708 19:56:44.085091   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:44.085101   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:44.085107   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:44.085909   25689 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 19:56:44.086045   25689 api_server.go:141] control plane version: v1.30.2
	I0708 19:56:44.086065   25689 api_server.go:131] duration metric: took 7.022616ms to wait for apiserver health ...
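
The healthz step above is a plain GET against the apiserver's /healthz endpoint, which answers with the literal body "ok" when healthy. One way to issue the same probe through an existing clientset (a sketch, not minikube's implementation):

package apicheck

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// CheckHealthz hits the apiserver's /healthz endpoint through the clientset's
// REST client and reports whether it answered "ok".
func CheckHealthz(ctx context.Context, cs kubernetes.Interface) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected healthz response: %q", body)
	}
	return nil
}
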
	I0708 19:56:44.086073   25689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 19:56:44.257297   25689 request.go:629] Waited for 171.152608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:56:44.257346   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:56:44.257354   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:44.257361   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:44.257368   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:44.262269   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:56:44.266549   25689 system_pods.go:59] 17 kube-system pods found
	I0708 19:56:44.266576   25689 system_pods.go:61] "coredns-7db6d8ff4d-4lzjf" [4bcfc11d-8368-4c95-bf64-5b3d09c4b455] Running
	I0708 19:56:44.266581   25689 system_pods.go:61] "coredns-7db6d8ff4d-w6m9c" [8f45dd66-3096-4878-8b2b-96dcf12bbef2] Running
	I0708 19:56:44.266586   25689 system_pods.go:61] "etcd-ha-511021" [52134689-3a05-4bfa-ae28-2696f8bf0ccb] Running
	I0708 19:56:44.266590   25689 system_pods.go:61] "etcd-ha-511021-m02" [acc2d6d9-6796-453d-a5bb-492c28c5eb94] Running
	I0708 19:56:44.266593   25689 system_pods.go:61] "kindnet-4f49v" [1f0b50ca-73cb-4ffb-9676-09e3a28d7636] Running
	I0708 19:56:44.266596   25689 system_pods.go:61] "kindnet-gn8kn" [68f966e1-e40c-4e6e-8fa4-d3167090fa7c] Running
	I0708 19:56:44.266599   25689 system_pods.go:61] "kube-apiserver-ha-511021" [e5f0c179-18b9-40ce-9c9c-bfe810f6a422] Running
	I0708 19:56:44.266602   25689 system_pods.go:61] "kube-apiserver-ha-511021-m02" [33e08ded-e75f-4f56-8d52-5447d025d348] Running
	I0708 19:56:44.266606   25689 system_pods.go:61] "kube-controller-manager-ha-511021" [136879af-0997-416e-956a-632e940e1da6] Running
	I0708 19:56:44.266609   25689 system_pods.go:61] "kube-controller-manager-ha-511021-m02" [a5d3e392-c4f1-4784-b234-e57a5e9689a9] Running
	I0708 19:56:44.266611   25689 system_pods.go:61] "kube-proxy-976tb" [97fd998d-9281-40b0-bd6d-cebf8d4bfa02] Running
	I0708 19:56:44.266614   25689 system_pods.go:61] "kube-proxy-tmkjf" [fb7c00aa-f846-430e-92a2-04cd2fc8a62b] Running
	I0708 19:56:44.266617   25689 system_pods.go:61] "kube-scheduler-ha-511021" [978f9f3f-1bfe-4d9c-9dcf-5a410f101c87] Running
	I0708 19:56:44.266620   25689 system_pods.go:61] "kube-scheduler-ha-511021-m02" [3a4313c1-625d-4ba1-873f-da3ae493f1b5] Running
	I0708 19:56:44.266623   25689 system_pods.go:61] "kube-vip-ha-511021" [c2d1c07a-51ae-4264-9fbc-fd7af40ac2d0] Running
	I0708 19:56:44.266628   25689 system_pods.go:61] "kube-vip-ha-511021-m02" [ebc968ae-70c7-45ac-aa9b-ddc9e7142f71] Running
	I0708 19:56:44.266633   25689 system_pods.go:61] "storage-provisioner" [7d02def4-3af1-4268-a8fa-072c6fd71c83] Running
	I0708 19:56:44.266638   25689 system_pods.go:74] duration metric: took 180.557225ms to wait for pod list to return data ...
	I0708 19:56:44.266647   25689 default_sa.go:34] waiting for default service account to be created ...
	I0708 19:56:44.458065   25689 request.go:629] Waited for 191.353602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/default/serviceaccounts
	I0708 19:56:44.458123   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/default/serviceaccounts
	I0708 19:56:44.458131   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:44.458142   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:44.458151   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:44.461390   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:44.461671   25689 default_sa.go:45] found service account: "default"
	I0708 19:56:44.461692   25689 default_sa.go:55] duration metric: took 195.038543ms for default service account to be created ...
	I0708 19:56:44.461703   25689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 19:56:44.657832   25689 request.go:629] Waited for 196.060395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:56:44.657907   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:56:44.657919   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:44.657930   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:44.657937   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:44.663091   25689 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0708 19:56:44.667327   25689 system_pods.go:86] 17 kube-system pods found
	I0708 19:56:44.667349   25689 system_pods.go:89] "coredns-7db6d8ff4d-4lzjf" [4bcfc11d-8368-4c95-bf64-5b3d09c4b455] Running
	I0708 19:56:44.667355   25689 system_pods.go:89] "coredns-7db6d8ff4d-w6m9c" [8f45dd66-3096-4878-8b2b-96dcf12bbef2] Running
	I0708 19:56:44.667359   25689 system_pods.go:89] "etcd-ha-511021" [52134689-3a05-4bfa-ae28-2696f8bf0ccb] Running
	I0708 19:56:44.667363   25689 system_pods.go:89] "etcd-ha-511021-m02" [acc2d6d9-6796-453d-a5bb-492c28c5eb94] Running
	I0708 19:56:44.667367   25689 system_pods.go:89] "kindnet-4f49v" [1f0b50ca-73cb-4ffb-9676-09e3a28d7636] Running
	I0708 19:56:44.667371   25689 system_pods.go:89] "kindnet-gn8kn" [68f966e1-e40c-4e6e-8fa4-d3167090fa7c] Running
	I0708 19:56:44.667375   25689 system_pods.go:89] "kube-apiserver-ha-511021" [e5f0c179-18b9-40ce-9c9c-bfe810f6a422] Running
	I0708 19:56:44.667379   25689 system_pods.go:89] "kube-apiserver-ha-511021-m02" [33e08ded-e75f-4f56-8d52-5447d025d348] Running
	I0708 19:56:44.667384   25689 system_pods.go:89] "kube-controller-manager-ha-511021" [136879af-0997-416e-956a-632e940e1da6] Running
	I0708 19:56:44.667388   25689 system_pods.go:89] "kube-controller-manager-ha-511021-m02" [a5d3e392-c4f1-4784-b234-e57a5e9689a9] Running
	I0708 19:56:44.667391   25689 system_pods.go:89] "kube-proxy-976tb" [97fd998d-9281-40b0-bd6d-cebf8d4bfa02] Running
	I0708 19:56:44.667395   25689 system_pods.go:89] "kube-proxy-tmkjf" [fb7c00aa-f846-430e-92a2-04cd2fc8a62b] Running
	I0708 19:56:44.667398   25689 system_pods.go:89] "kube-scheduler-ha-511021" [978f9f3f-1bfe-4d9c-9dcf-5a410f101c87] Running
	I0708 19:56:44.667402   25689 system_pods.go:89] "kube-scheduler-ha-511021-m02" [3a4313c1-625d-4ba1-873f-da3ae493f1b5] Running
	I0708 19:56:44.667405   25689 system_pods.go:89] "kube-vip-ha-511021" [c2d1c07a-51ae-4264-9fbc-fd7af40ac2d0] Running
	I0708 19:56:44.667410   25689 system_pods.go:89] "kube-vip-ha-511021-m02" [ebc968ae-70c7-45ac-aa9b-ddc9e7142f71] Running
	I0708 19:56:44.667414   25689 system_pods.go:89] "storage-provisioner" [7d02def4-3af1-4268-a8fa-072c6fd71c83] Running
	I0708 19:56:44.667421   25689 system_pods.go:126] duration metric: took 205.709311ms to wait for k8s-apps to be running ...
	I0708 19:56:44.667431   25689 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 19:56:44.667495   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 19:56:44.683752   25689 system_svc.go:56] duration metric: took 16.313272ms WaitForService to wait for kubelet
	I0708 19:56:44.683777   25689 kubeadm.go:576] duration metric: took 19.187277697s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 19:56:44.683793   25689 node_conditions.go:102] verifying NodePressure condition ...
	I0708 19:56:44.857511   25689 request.go:629] Waited for 173.65485ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes
	I0708 19:56:44.857556   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes
	I0708 19:56:44.857580   25689 round_trippers.go:469] Request Headers:
	I0708 19:56:44.857590   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:56:44.857597   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:56:44.861147   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:56:44.862054   25689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 19:56:44.862081   25689 node_conditions.go:123] node cpu capacity is 2
	I0708 19:56:44.862095   25689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 19:56:44.862099   25689 node_conditions.go:123] node cpu capacity is 2
	I0708 19:56:44.862103   25689 node_conditions.go:105] duration metric: took 178.305226ms to run NodePressure ...
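
The NodePressure step lists the nodes and reads the capacity fields echoed above (ephemeral storage and CPU count for each of the two nodes). A small sketch of reading those same fields with client-go (illustrative only):

package nodecap

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PrintNodeCapacity lists nodes and prints the capacity fields the log
// reports: ephemeral storage and CPU count.
func PrintNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
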
	I0708 19:56:44.862112   25689 start.go:240] waiting for startup goroutines ...
	I0708 19:56:44.862138   25689 start.go:254] writing updated cluster config ...
	I0708 19:56:44.864101   25689 out.go:177] 
	I0708 19:56:44.865447   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:56:44.865533   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:56:44.866955   25689 out.go:177] * Starting "ha-511021-m03" control-plane node in "ha-511021" cluster
	I0708 19:56:44.867966   25689 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 19:56:44.867985   25689 cache.go:56] Caching tarball of preloaded images
	I0708 19:56:44.868084   25689 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 19:56:44.868097   25689 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 19:56:44.868191   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:56:44.868369   25689 start.go:360] acquireMachinesLock for ha-511021-m03: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 19:56:44.868416   25689 start.go:364] duration metric: took 26.562µs to acquireMachinesLock for "ha-511021-m03"
	I0708 19:56:44.868439   25689 start.go:93] Provisioning new machine with config: &{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:56:44.868539   25689 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0708 19:56:44.869965   25689 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 19:56:44.870070   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:56:44.870101   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:56:44.886541   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0708 19:56:44.886928   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:56:44.887333   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:56:44.887352   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:56:44.887678   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:56:44.887821   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetMachineName
	I0708 19:56:44.887950   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:56:44.888103   25689 start.go:159] libmachine.API.Create for "ha-511021" (driver="kvm2")
	I0708 19:56:44.888137   25689 client.go:168] LocalClient.Create starting
	I0708 19:56:44.888174   25689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem
	I0708 19:56:44.888208   25689 main.go:141] libmachine: Decoding PEM data...
	I0708 19:56:44.888227   25689 main.go:141] libmachine: Parsing certificate...
	I0708 19:56:44.888354   25689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem
	I0708 19:56:44.888393   25689 main.go:141] libmachine: Decoding PEM data...
	I0708 19:56:44.888410   25689 main.go:141] libmachine: Parsing certificate...
	I0708 19:56:44.888450   25689 main.go:141] libmachine: Running pre-create checks...
	I0708 19:56:44.888463   25689 main.go:141] libmachine: (ha-511021-m03) Calling .PreCreateCheck
	I0708 19:56:44.888624   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetConfigRaw
	I0708 19:56:44.889028   25689 main.go:141] libmachine: Creating machine...
	I0708 19:56:44.889043   25689 main.go:141] libmachine: (ha-511021-m03) Calling .Create
	I0708 19:56:44.889149   25689 main.go:141] libmachine: (ha-511021-m03) Creating KVM machine...
	I0708 19:56:44.890401   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found existing default KVM network
	I0708 19:56:44.890531   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found existing private KVM network mk-ha-511021
	I0708 19:56:44.890628   25689 main.go:141] libmachine: (ha-511021-m03) Setting up store path in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03 ...
	I0708 19:56:44.890659   25689 main.go:141] libmachine: (ha-511021-m03) Building disk image from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso
	I0708 19:56:44.890702   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:44.890623   26453 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:56:44.890791   25689 main.go:141] libmachine: (ha-511021-m03) Downloading /home/jenkins/minikube-integration/19195-5988/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso...
	I0708 19:56:45.108556   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:45.108427   26453 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa...
	I0708 19:56:45.347415   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:45.347305   26453 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/ha-511021-m03.rawdisk...
	I0708 19:56:45.347464   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Writing magic tar header
	I0708 19:56:45.347479   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Writing SSH key tar header
	I0708 19:56:45.347531   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:45.347475   26453 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03 ...
	I0708 19:56:45.347614   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03
	I0708 19:56:45.347642   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines
	I0708 19:56:45.347652   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:56:45.347664   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988
	I0708 19:56:45.347672   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0708 19:56:45.347683   25689 main.go:141] libmachine: (ha-511021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03 (perms=drwx------)
	I0708 19:56:45.347695   25689 main.go:141] libmachine: (ha-511021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines (perms=drwxr-xr-x)
	I0708 19:56:45.347710   25689 main.go:141] libmachine: (ha-511021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube (perms=drwxr-xr-x)
	I0708 19:56:45.347726   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home/jenkins
	I0708 19:56:45.347740   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Checking permissions on dir: /home
	I0708 19:56:45.347748   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Skipping /home - not owner
	I0708 19:56:45.347761   25689 main.go:141] libmachine: (ha-511021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988 (perms=drwxrwxr-x)
	I0708 19:56:45.347773   25689 main.go:141] libmachine: (ha-511021-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0708 19:56:45.347785   25689 main.go:141] libmachine: (ha-511021-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
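
Note: the permission pass above walks from the new machine directory up toward / and adds the owner-execute (search) bit to every directory the current user owns, skipping anything it does not own (here /home). A minimal Go sketch of that pattern, written for this report as an illustration only (the helper name is made up and this is not minikube's code; the path is the one from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"syscall"
    )

    // ensureTraversable walks from dir up to /, and on every directory owned
    // by the current user makes sure the owner-execute (search) bit is set,
    // so files under the machine store stay reachable. Directories owned by
    // someone else are skipped, like "/home" in the log above.
    func ensureTraversable(dir string) error {
    	uid := os.Getuid()
    	for {
    		fi, err := os.Stat(dir)
    		if err != nil {
    			return err
    		}
    		st, ok := fi.Sys().(*syscall.Stat_t) // Linux-specific ownership info
    		if !ok || int(st.Uid) != uid {
    			fmt.Println("Skipping", dir, "- not owner")
    		} else if fi.Mode().Perm()&0o100 == 0 {
    			if err := os.Chmod(dir, fi.Mode().Perm()|0o100); err != nil {
    				return err
    			}
    		}
    		parent := filepath.Dir(dir)
    		if parent == dir { // reached /
    			return nil
    		}
    		dir = parent
    	}
    }

    func main() {
    	if err := ensureTraversable("/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03"); err != nil {
    		fmt.Println("error:", err)
    	}
    }
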
	I0708 19:56:45.347794   25689 main.go:141] libmachine: (ha-511021-m03) Creating domain...
	I0708 19:56:45.348887   25689 main.go:141] libmachine: (ha-511021-m03) define libvirt domain using xml: 
	I0708 19:56:45.348905   25689 main.go:141] libmachine: (ha-511021-m03) <domain type='kvm'>
	I0708 19:56:45.348913   25689 main.go:141] libmachine: (ha-511021-m03)   <name>ha-511021-m03</name>
	I0708 19:56:45.348918   25689 main.go:141] libmachine: (ha-511021-m03)   <memory unit='MiB'>2200</memory>
	I0708 19:56:45.348924   25689 main.go:141] libmachine: (ha-511021-m03)   <vcpu>2</vcpu>
	I0708 19:56:45.348930   25689 main.go:141] libmachine: (ha-511021-m03)   <features>
	I0708 19:56:45.348935   25689 main.go:141] libmachine: (ha-511021-m03)     <acpi/>
	I0708 19:56:45.348944   25689 main.go:141] libmachine: (ha-511021-m03)     <apic/>
	I0708 19:56:45.348948   25689 main.go:141] libmachine: (ha-511021-m03)     <pae/>
	I0708 19:56:45.348958   25689 main.go:141] libmachine: (ha-511021-m03)     
	I0708 19:56:45.348981   25689 main.go:141] libmachine: (ha-511021-m03)   </features>
	I0708 19:56:45.348999   25689 main.go:141] libmachine: (ha-511021-m03)   <cpu mode='host-passthrough'>
	I0708 19:56:45.349005   25689 main.go:141] libmachine: (ha-511021-m03)   
	I0708 19:56:45.349011   25689 main.go:141] libmachine: (ha-511021-m03)   </cpu>
	I0708 19:56:45.349020   25689 main.go:141] libmachine: (ha-511021-m03)   <os>
	I0708 19:56:45.349031   25689 main.go:141] libmachine: (ha-511021-m03)     <type>hvm</type>
	I0708 19:56:45.349041   25689 main.go:141] libmachine: (ha-511021-m03)     <boot dev='cdrom'/>
	I0708 19:56:45.349052   25689 main.go:141] libmachine: (ha-511021-m03)     <boot dev='hd'/>
	I0708 19:56:45.349064   25689 main.go:141] libmachine: (ha-511021-m03)     <bootmenu enable='no'/>
	I0708 19:56:45.349070   25689 main.go:141] libmachine: (ha-511021-m03)   </os>
	I0708 19:56:45.349075   25689 main.go:141] libmachine: (ha-511021-m03)   <devices>
	I0708 19:56:45.349089   25689 main.go:141] libmachine: (ha-511021-m03)     <disk type='file' device='cdrom'>
	I0708 19:56:45.349099   25689 main.go:141] libmachine: (ha-511021-m03)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/boot2docker.iso'/>
	I0708 19:56:45.349106   25689 main.go:141] libmachine: (ha-511021-m03)       <target dev='hdc' bus='scsi'/>
	I0708 19:56:45.349113   25689 main.go:141] libmachine: (ha-511021-m03)       <readonly/>
	I0708 19:56:45.349123   25689 main.go:141] libmachine: (ha-511021-m03)     </disk>
	I0708 19:56:45.349135   25689 main.go:141] libmachine: (ha-511021-m03)     <disk type='file' device='disk'>
	I0708 19:56:45.349147   25689 main.go:141] libmachine: (ha-511021-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0708 19:56:45.349161   25689 main.go:141] libmachine: (ha-511021-m03)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/ha-511021-m03.rawdisk'/>
	I0708 19:56:45.349171   25689 main.go:141] libmachine: (ha-511021-m03)       <target dev='hda' bus='virtio'/>
	I0708 19:56:45.349177   25689 main.go:141] libmachine: (ha-511021-m03)     </disk>
	I0708 19:56:45.349184   25689 main.go:141] libmachine: (ha-511021-m03)     <interface type='network'>
	I0708 19:56:45.349217   25689 main.go:141] libmachine: (ha-511021-m03)       <source network='mk-ha-511021'/>
	I0708 19:56:45.349241   25689 main.go:141] libmachine: (ha-511021-m03)       <model type='virtio'/>
	I0708 19:56:45.349253   25689 main.go:141] libmachine: (ha-511021-m03)     </interface>
	I0708 19:56:45.349265   25689 main.go:141] libmachine: (ha-511021-m03)     <interface type='network'>
	I0708 19:56:45.349278   25689 main.go:141] libmachine: (ha-511021-m03)       <source network='default'/>
	I0708 19:56:45.349290   25689 main.go:141] libmachine: (ha-511021-m03)       <model type='virtio'/>
	I0708 19:56:45.349313   25689 main.go:141] libmachine: (ha-511021-m03)     </interface>
	I0708 19:56:45.349335   25689 main.go:141] libmachine: (ha-511021-m03)     <serial type='pty'>
	I0708 19:56:45.349347   25689 main.go:141] libmachine: (ha-511021-m03)       <target port='0'/>
	I0708 19:56:45.349353   25689 main.go:141] libmachine: (ha-511021-m03)     </serial>
	I0708 19:56:45.349363   25689 main.go:141] libmachine: (ha-511021-m03)     <console type='pty'>
	I0708 19:56:45.349375   25689 main.go:141] libmachine: (ha-511021-m03)       <target type='serial' port='0'/>
	I0708 19:56:45.349387   25689 main.go:141] libmachine: (ha-511021-m03)     </console>
	I0708 19:56:45.349401   25689 main.go:141] libmachine: (ha-511021-m03)     <rng model='virtio'>
	I0708 19:56:45.349428   25689 main.go:141] libmachine: (ha-511021-m03)       <backend model='random'>/dev/random</backend>
	I0708 19:56:45.349448   25689 main.go:141] libmachine: (ha-511021-m03)     </rng>
	I0708 19:56:45.349460   25689 main.go:141] libmachine: (ha-511021-m03)     
	I0708 19:56:45.349468   25689 main.go:141] libmachine: (ha-511021-m03)     
	I0708 19:56:45.349475   25689 main.go:141] libmachine: (ha-511021-m03)   </devices>
	I0708 19:56:45.349482   25689 main.go:141] libmachine: (ha-511021-m03) </domain>
	I0708 19:56:45.349491   25689 main.go:141] libmachine: (ha-511021-m03) 
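
Note: the XML above is handed to libvirt to define and start the guest. A compressed sketch of that step using the libvirt Go bindings (libvirt.org/go/libvirt, which require the libvirt C library) is shown below; it is an illustration under those assumptions, not the kvm2 driver's actual code, and the domain XML is trimmed to a stub rather than the full definition logged above.

    package main

    import (
    	"fmt"

    	"libvirt.org/go/libvirt"
    )

    func main() {
    	// Connect to the same URI the config above uses (qemu:///system).
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// Stub XML with a placeholder name; the real definition also carries the
    	// boot2docker ISO, the rawdisk, two virtio NICs, a serial console and an RNG.
    	xml := `<domain type='kvm'><name>demo-m03</name><memory unit='MiB'>2200</memory><vcpu>2</vcpu><os><type>hvm</type></os></domain>`

    	dom, err := conn.DomainDefineXML(xml) // "define libvirt domain using xml"
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // "Creating domain..." actually starts the guest
    		panic(err)
    	}
    	fmt.Println("domain defined and started")
    }
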
	I0708 19:56:45.356148   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:5c:5a:59 in network default
	I0708 19:56:45.356744   25689 main.go:141] libmachine: (ha-511021-m03) Ensuring networks are active...
	I0708 19:56:45.356770   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:45.357591   25689 main.go:141] libmachine: (ha-511021-m03) Ensuring network default is active
	I0708 19:56:45.357886   25689 main.go:141] libmachine: (ha-511021-m03) Ensuring network mk-ha-511021 is active
	I0708 19:56:45.358227   25689 main.go:141] libmachine: (ha-511021-m03) Getting domain xml...
	I0708 19:56:45.358881   25689 main.go:141] libmachine: (ha-511021-m03) Creating domain...
	I0708 19:56:46.618992   25689 main.go:141] libmachine: (ha-511021-m03) Waiting to get IP...
	I0708 19:56:46.619693   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:46.620162   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:46.620198   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:46.620133   26453 retry.go:31] will retry after 202.321963ms: waiting for machine to come up
	I0708 19:56:46.824561   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:46.825051   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:46.825077   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:46.824997   26453 retry.go:31] will retry after 306.03783ms: waiting for machine to come up
	I0708 19:56:47.132473   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:47.132887   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:47.132913   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:47.132847   26453 retry.go:31] will retry after 374.380364ms: waiting for machine to come up
	I0708 19:56:47.508241   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:47.508620   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:47.508650   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:47.508576   26453 retry.go:31] will retry after 424.568331ms: waiting for machine to come up
	I0708 19:56:47.935212   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:47.935636   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:47.935659   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:47.935572   26453 retry.go:31] will retry after 606.237869ms: waiting for machine to come up
	I0708 19:56:48.544043   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:48.544527   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:48.544594   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:48.544492   26453 retry.go:31] will retry after 739.656893ms: waiting for machine to come up
	I0708 19:56:49.285546   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:49.285947   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:49.285976   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:49.285897   26453 retry.go:31] will retry after 855.924967ms: waiting for machine to come up
	I0708 19:56:50.142964   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:50.143355   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:50.143382   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:50.143326   26453 retry.go:31] will retry after 1.301147226s: waiting for machine to come up
	I0708 19:56:51.446073   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:51.446554   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:51.446579   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:51.446512   26453 retry.go:31] will retry after 1.222212721s: waiting for machine to come up
	I0708 19:56:52.670715   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:52.671102   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:52.671129   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:52.671068   26453 retry.go:31] will retry after 1.712355758s: waiting for machine to come up
	I0708 19:56:54.386067   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:54.386567   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:54.386595   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:54.386515   26453 retry.go:31] will retry after 2.80539565s: waiting for machine to come up
	I0708 19:56:57.194500   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:56:57.194933   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:56:57.194961   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:56:57.194870   26453 retry.go:31] will retry after 2.897013176s: waiting for machine to come up
	I0708 19:57:00.093476   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:00.093952   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:57:00.093992   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:57:00.093908   26453 retry.go:31] will retry after 2.750912917s: waiting for machine to come up
	I0708 19:57:02.847826   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:02.848235   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find current IP address of domain ha-511021-m03 in network mk-ha-511021
	I0708 19:57:02.848256   25689 main.go:141] libmachine: (ha-511021-m03) DBG | I0708 19:57:02.848198   26453 retry.go:31] will retry after 5.060992583s: waiting for machine to come up
	I0708 19:57:07.913251   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:07.913644   25689 main.go:141] libmachine: (ha-511021-m03) Found IP for machine: 192.168.39.70
	I0708 19:57:07.913665   25689 main.go:141] libmachine: (ha-511021-m03) Reserving static IP address...
	I0708 19:57:07.913675   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has current primary IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:07.914029   25689 main.go:141] libmachine: (ha-511021-m03) DBG | unable to find host DHCP lease matching {name: "ha-511021-m03", mac: "52:54:00:a7:80:5b", ip: "192.168.39.70"} in network mk-ha-511021
	I0708 19:57:07.988510   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Getting to WaitForSSH function...
	I0708 19:57:07.988533   25689 main.go:141] libmachine: (ha-511021-m03) Reserved static IP address: 192.168.39.70
	I0708 19:57:07.988546   25689 main.go:141] libmachine: (ha-511021-m03) Waiting for SSH to be available...
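
Note: the "will retry after ..." lines above are a polling loop with growing, jittered delays: ask the network's DHCP leases for the guest's MAC and back off until an address appears or a deadline passes. A generic Go sketch of that wait pattern follows; the lease lookup is a placeholder, not the driver's real query.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying libvirt's DHCP leases for the guest's
    // MAC (52:54:00:a7:80:5b in the log); here it always fails so the demo ends.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls with a growing, jittered delay, mirroring the
    // "will retry after ...: waiting for machine to come up" lines.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
    		time.Sleep(jittered)
    		if delay < 5*time.Second {
    			delay = delay * 3 / 2 // grow the base delay, capped around 5s
    		}
    	}
    	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }

    func main() {
    	ip, err := waitForIP("52:54:00:a7:80:5b", 3*time.Second)
    	fmt.Println(ip, err)
    }
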
	I0708 19:57:07.991237   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:07.991735   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:07.991766   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:07.991828   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Using SSH client type: external
	I0708 19:57:07.991853   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa (-rw-------)
	I0708 19:57:07.991885   25689 main.go:141] libmachine: (ha-511021-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 19:57:07.991897   25689 main.go:141] libmachine: (ha-511021-m03) DBG | About to run SSH command:
	I0708 19:57:07.991909   25689 main.go:141] libmachine: (ha-511021-m03) DBG | exit 0
	I0708 19:57:08.123650   25689 main.go:141] libmachine: (ha-511021-m03) DBG | SSH cmd err, output: <nil>: 
	I0708 19:57:08.123964   25689 main.go:141] libmachine: (ha-511021-m03) KVM machine creation complete!
	I0708 19:57:08.124211   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetConfigRaw
	I0708 19:57:08.124710   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:08.124897   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:08.125023   25689 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0708 19:57:08.125038   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetState
	I0708 19:57:08.126437   25689 main.go:141] libmachine: Detecting operating system of created instance...
	I0708 19:57:08.126453   25689 main.go:141] libmachine: Waiting for SSH to be available...
	I0708 19:57:08.126460   25689 main.go:141] libmachine: Getting to WaitForSSH function...
	I0708 19:57:08.126469   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.128873   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.129279   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.129304   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.129461   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:08.129629   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.129935   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.130075   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:08.130261   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:57:08.130499   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0708 19:57:08.130513   25689 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0708 19:57:08.242960   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
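
Note: the WaitForSSH step simply runs "exit 0" over SSH with the machine's generated key until the command succeeds. A self-contained Go sketch of one such probe using golang.org/x/crypto/ssh is shown below; the host, user and key path are taken from the log, but the code itself is illustrative and not minikube's implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // probeSSH dials the node and runs "exit 0", like the WaitForSSH step above.
    func probeSSH(host, keyPath string) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", host+":22", cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	return sess.Run("exit 0")
    }

    func main() {
    	err := probeSSH("192.168.39.70", "/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa")
    	if err != nil {
    		fmt.Println("ssh not ready:", err)
    		return
    	}
    	fmt.Println("ssh ready")
    }
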
	I0708 19:57:08.242985   25689 main.go:141] libmachine: Detecting the provisioner...
	I0708 19:57:08.242996   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.246088   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.246487   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.246516   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.246644   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:08.246839   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.246986   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.247113   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:08.247285   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:57:08.247499   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0708 19:57:08.247514   25689 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0708 19:57:08.360156   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0708 19:57:08.360236   25689 main.go:141] libmachine: found compatible host: buildroot
	I0708 19:57:08.360246   25689 main.go:141] libmachine: Provisioning with buildroot...
	I0708 19:57:08.360254   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetMachineName
	I0708 19:57:08.360497   25689 buildroot.go:166] provisioning hostname "ha-511021-m03"
	I0708 19:57:08.360529   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetMachineName
	I0708 19:57:08.360714   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.363569   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.363920   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.363945   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.364095   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:08.364268   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.364445   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.364604   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:08.364765   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:57:08.364920   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0708 19:57:08.364938   25689 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-511021-m03 && echo "ha-511021-m03" | sudo tee /etc/hostname
	I0708 19:57:08.493966   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-511021-m03
	
	I0708 19:57:08.493989   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.496619   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.497015   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.497033   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.497274   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:08.497451   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.497608   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.497736   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:08.497905   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:57:08.498061   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0708 19:57:08.498076   25689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-511021-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-511021-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-511021-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 19:57:08.621154   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 19:57:08.621184   25689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 19:57:08.621203   25689 buildroot.go:174] setting up certificates
	I0708 19:57:08.621214   25689 provision.go:84] configureAuth start
	I0708 19:57:08.621225   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetMachineName
	I0708 19:57:08.621493   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 19:57:08.624180   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.624618   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.624645   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.624812   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.626647   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.627041   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.627063   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.627250   25689 provision.go:143] copyHostCerts
	I0708 19:57:08.627272   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 19:57:08.627299   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 19:57:08.627307   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 19:57:08.627378   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 19:57:08.627458   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 19:57:08.627482   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 19:57:08.627491   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 19:57:08.627517   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 19:57:08.627566   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 19:57:08.627582   25689 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 19:57:08.627588   25689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 19:57:08.627608   25689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 19:57:08.627653   25689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.ha-511021-m03 san=[127.0.0.1 192.168.39.70 ha-511021-m03 localhost minikube]
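
Note: the server certificate generated here is signed by the minikube CA and carries the node's IP addresses and hostnames as SANs (127.0.0.1, 192.168.39.70, ha-511021-m03, localhost, minikube). A rough Go illustration with crypto/x509 follows, using a throwaway CA in place of ca.pem/ca-key.pem and the SAN list from the log line above; it is not the actual provisioning code, and error handling is mostly elided.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for .minikube/certs/ca.pem / ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert whose org and SANs match the values in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-511021-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the machine config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.70")},
    		DNSNames:     []string{"ha-511021-m03", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("server.pem: %d DER bytes\n", len(der))
    }
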
	I0708 19:57:08.709893   25689 provision.go:177] copyRemoteCerts
	I0708 19:57:08.709964   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 19:57:08.709992   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.713220   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.713630   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.713663   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.713839   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:08.714029   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.714234   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:08.714370   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 19:57:08.802081   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 19:57:08.802153   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 19:57:08.826514   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 19:57:08.826598   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0708 19:57:08.850785   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 19:57:08.850864   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 19:57:08.877535   25689 provision.go:87] duration metric: took 256.307129ms to configureAuth
	I0708 19:57:08.877566   25689 buildroot.go:189] setting minikube options for container-runtime
	I0708 19:57:08.877797   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:57:08.877882   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:08.880566   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.880976   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:08.881007   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:08.881173   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:08.881366   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.881548   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:08.881682   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:08.881850   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:57:08.882045   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0708 19:57:08.882066   25689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 19:57:09.161202   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 19:57:09.161235   25689 main.go:141] libmachine: Checking connection to Docker...
	I0708 19:57:09.161253   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetURL
	I0708 19:57:09.162410   25689 main.go:141] libmachine: (ha-511021-m03) DBG | Using libvirt version 6000000
	I0708 19:57:09.164876   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.165221   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.165247   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.165357   25689 main.go:141] libmachine: Docker is up and running!
	I0708 19:57:09.165373   25689 main.go:141] libmachine: Reticulating splines...
	I0708 19:57:09.165380   25689 client.go:171] duration metric: took 24.277232778s to LocalClient.Create
	I0708 19:57:09.165403   25689 start.go:167] duration metric: took 24.277302306s to libmachine.API.Create "ha-511021"
	I0708 19:57:09.165415   25689 start.go:293] postStartSetup for "ha-511021-m03" (driver="kvm2")
	I0708 19:57:09.165428   25689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 19:57:09.165448   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:09.165644   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 19:57:09.165664   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:09.167745   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.168020   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.168040   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.168196   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:09.168385   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:09.168535   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:09.168658   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 19:57:09.259553   25689 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 19:57:09.263695   25689 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 19:57:09.263720   25689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 19:57:09.263780   25689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 19:57:09.264006   25689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 19:57:09.264033   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /etc/ssl/certs/131412.pem
	I0708 19:57:09.264200   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 19:57:09.275837   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 19:57:09.303157   25689 start.go:296] duration metric: took 137.729964ms for postStartSetup
	I0708 19:57:09.303199   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetConfigRaw
	I0708 19:57:09.303835   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 19:57:09.306642   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.307136   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.307161   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.307485   25689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 19:57:09.307693   25689 start.go:128] duration metric: took 24.439141413s to createHost
	I0708 19:57:09.307716   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:09.310073   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.310482   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.310509   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.310692   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:09.310896   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:09.311038   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:09.311213   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:09.311468   25689 main.go:141] libmachine: Using SSH client type: native
	I0708 19:57:09.311663   25689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0708 19:57:09.311679   25689 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 19:57:09.428333   25689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720468629.404388528
	
	I0708 19:57:09.428367   25689 fix.go:216] guest clock: 1720468629.404388528
	I0708 19:57:09.428378   25689 fix.go:229] Guest: 2024-07-08 19:57:09.404388528 +0000 UTC Remote: 2024-07-08 19:57:09.307705167 +0000 UTC m=+149.693400321 (delta=96.683361ms)
	I0708 19:57:09.428400   25689 fix.go:200] guest clock delta is within tolerance: 96.683361ms
	I0708 19:57:09.428408   25689 start.go:83] releasing machines lock for "ha-511021-m03", held for 24.559980204s
	I0708 19:57:09.428431   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:09.428694   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 19:57:09.431379   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.431749   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.431776   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.433988   25689 out.go:177] * Found network options:
	I0708 19:57:09.435267   25689 out.go:177]   - NO_PROXY=192.168.39.33,192.168.39.216
	W0708 19:57:09.436484   25689 proxy.go:119] fail to check proxy env: Error ip not in block
	W0708 19:57:09.436507   25689 proxy.go:119] fail to check proxy env: Error ip not in block
	I0708 19:57:09.436522   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:09.437152   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:09.437343   25689 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 19:57:09.437438   25689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 19:57:09.437473   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	W0708 19:57:09.437536   25689 proxy.go:119] fail to check proxy env: Error ip not in block
	W0708 19:57:09.437559   25689 proxy.go:119] fail to check proxy env: Error ip not in block
	I0708 19:57:09.437621   25689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 19:57:09.437643   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 19:57:09.440477   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.440568   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.440793   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.440820   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.440952   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:09.440972   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:09.440989   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:09.441158   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 19:57:09.441174   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:09.441339   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:09.441352   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 19:57:09.441505   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 19:57:09.441501   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 19:57:09.441659   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 19:57:09.681469   25689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 19:57:09.687612   25689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 19:57:09.687692   25689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 19:57:09.704050   25689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 19:57:09.704073   25689 start.go:494] detecting cgroup driver to use...
	I0708 19:57:09.704129   25689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 19:57:09.720919   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 19:57:09.736474   25689 docker.go:217] disabling cri-docker service (if available) ...
	I0708 19:57:09.736540   25689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 19:57:09.751202   25689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 19:57:09.765460   25689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 19:57:09.890467   25689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 19:57:10.062358   25689 docker.go:233] disabling docker service ...
	I0708 19:57:10.062428   25689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 19:57:10.077177   25689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 19:57:10.090747   25689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 19:57:10.210122   25689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 19:57:10.325009   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 19:57:10.340324   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 19:57:10.360011   25689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 19:57:10.360073   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:57:10.372377   25689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 19:57:10.372447   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:57:10.383391   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:57:10.393837   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:57:10.404540   25689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 19:57:10.415811   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:57:10.428220   25689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 19:57:10.446649   25689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
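
Note: taken together, the sed edits above leave the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands in this log and assumes the stock section layout; the file on the node may contain more.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
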
	I0708 19:57:10.457657   25689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 19:57:10.467320   25689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 19:57:10.467375   25689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 19:57:10.483062   25689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
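The sysctl probe above fails only because br_netfilter is not loaded yet; the log notes this "might be okay", loads the module, then enables IP forwarding. A sketch of re-running the same checks on the node afterwards (assumes the guest kernel ships the module, as it does here):

  # sketch: confirm the bridge-netfilter and forwarding prep succeeded
  lsmod | grep br_netfilter                     # module should now be listed
  sysctl net.bridge.bridge-nf-call-iptables     # path exists once br_netfilter is loaded
  cat /proc/sys/net/ipv4/ip_forward             # set to 1 by the echo above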
	I0708 19:57:10.493676   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:57:10.617943   25689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 19:57:10.751365   25689 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 19:57:10.751438   25689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 19:57:10.756529   25689 start.go:562] Will wait 60s for crictl version
	I0708 19:57:10.756589   25689 ssh_runner.go:195] Run: which crictl
	I0708 19:57:10.760562   25689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 19:57:10.804209   25689 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 19:57:10.804285   25689 ssh_runner.go:195] Run: crio --version
	I0708 19:57:10.837994   25689 ssh_runner.go:195] Run: crio --version
	I0708 19:57:10.870751   25689 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 19:57:10.872070   25689 out.go:177]   - env NO_PROXY=192.168.39.33
	I0708 19:57:10.873397   25689 out.go:177]   - env NO_PROXY=192.168.39.33,192.168.39.216
	I0708 19:57:10.874843   25689 main.go:141] libmachine: (ha-511021-m03) Calling .GetIP
	I0708 19:57:10.877528   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:10.877940   25689 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 19:57:10.877971   25689 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 19:57:10.878177   25689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 19:57:10.883258   25689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:57:10.896197   25689 mustload.go:65] Loading cluster: ha-511021
	I0708 19:57:10.896452   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:57:10.896728   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:57:10.896773   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:57:10.912477   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42533
	I0708 19:57:10.912904   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:57:10.913330   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:57:10.913350   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:57:10.913687   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:57:10.913889   25689 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 19:57:10.915501   25689 host.go:66] Checking if "ha-511021" exists ...
	I0708 19:57:10.915765   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:57:10.915795   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:57:10.932616   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0708 19:57:10.933215   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:57:10.933656   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:57:10.933676   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:57:10.933973   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:57:10.934145   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:57:10.934320   25689 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021 for IP: 192.168.39.70
	I0708 19:57:10.934334   25689 certs.go:194] generating shared ca certs ...
	I0708 19:57:10.934353   25689 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:57:10.934505   25689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 19:57:10.934564   25689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 19:57:10.934579   25689 certs.go:256] generating profile certs ...
	I0708 19:57:10.934675   25689 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key
	I0708 19:57:10.934706   25689 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a97293a6
	I0708 19:57:10.934727   25689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a97293a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.33 192.168.39.216 192.168.39.70 192.168.39.254]
	I0708 19:57:11.186337   25689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a97293a6 ...
	I0708 19:57:11.186366   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a97293a6: {Name:mk4a8d0195207cfa7335a3764eebf9c499e522fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:57:11.186539   25689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a97293a6 ...
	I0708 19:57:11.186554   25689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a97293a6: {Name:mkbf5807ae56dc882b5c365ab0ded64ac1264cab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 19:57:11.186648   25689 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a97293a6 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt
	I0708 19:57:11.186792   25689 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a97293a6 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key
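The profile certificate generated above is issued for the service VIP, localhost, the three node IPs and the HA VIP listed at 19:57:10.934727. A sketch of confirming those SANs in the written certificate (path copied from this log; requires openssl on the host):

  # sketch: list the SANs of the freshly generated apiserver cert
  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt \
    | grep -A1 'Subject Alternative Name'
  # expected to include 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.33, 192.168.39.216,
  # 192.168.39.70 and 192.168.39.254, matching the IP list above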
	I0708 19:57:11.186948   25689 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key
	I0708 19:57:11.186964   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 19:57:11.186983   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 19:57:11.187003   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 19:57:11.187023   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 19:57:11.187041   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 19:57:11.187060   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 19:57:11.187079   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 19:57:11.187096   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 19:57:11.187153   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 19:57:11.187190   25689 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 19:57:11.187203   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 19:57:11.187236   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 19:57:11.187271   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 19:57:11.187301   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 19:57:11.187353   25689 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 19:57:11.187388   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem -> /usr/share/ca-certificates/13141.pem
	I0708 19:57:11.187408   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /usr/share/ca-certificates/131412.pem
	I0708 19:57:11.187426   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:57:11.187480   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:57:11.190361   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:57:11.190768   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:57:11.190794   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:57:11.191018   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:57:11.191208   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:57:11.191318   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:57:11.191435   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:57:11.267789   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0708 19:57:11.276963   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0708 19:57:11.290610   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0708 19:57:11.296404   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0708 19:57:11.306894   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0708 19:57:11.311355   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0708 19:57:11.323594   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0708 19:57:11.328064   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0708 19:57:11.350627   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0708 19:57:11.355118   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0708 19:57:11.365649   25689 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0708 19:57:11.369585   25689 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0708 19:57:11.382513   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 19:57:11.410413   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 19:57:11.439035   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 19:57:11.466164   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 19:57:11.490615   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0708 19:57:11.517748   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 19:57:11.544235   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 19:57:11.570357   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 19:57:11.596155   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 19:57:11.621728   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 19:57:11.648124   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 19:57:11.676095   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0708 19:57:11.694076   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0708 19:57:11.711753   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0708 19:57:11.729257   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0708 19:57:11.747066   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0708 19:57:11.764137   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0708 19:57:11.781356   25689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0708 19:57:11.798294   25689 ssh_runner.go:195] Run: openssl version
	I0708 19:57:11.804430   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 19:57:11.815489   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 19:57:11.820172   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 19:57:11.820236   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 19:57:11.826389   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 19:57:11.838649   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 19:57:11.850789   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 19:57:11.855920   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 19:57:11.855978   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 19:57:11.861937   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 19:57:11.873594   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 19:57:11.885448   25689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:57:11.891160   25689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:57:11.891230   25689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 19:57:11.897885   25689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
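The 8-hex-digit link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes, which is why each ln -fs is paired with an openssl x509 -hash run. A sketch of reproducing one of them by hand on the node:

  # sketch: the trust-store link name is the certificate's subject hash
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 per this log
  ls -l /etc/ssl/certs/b5213941.0                                           # symlink created above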
	I0708 19:57:11.909930   25689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 19:57:11.914536   25689 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 19:57:11.914590   25689 kubeadm.go:928] updating node {m03 192.168.39.70 8443 v1.30.2 crio true true} ...
	I0708 19:57:11.914677   25689 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-511021-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 19:57:11.914700   25689 kube-vip.go:115] generating kube-vip config ...
	I0708 19:57:11.914733   25689 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0708 19:57:11.935213   25689 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0708 19:57:11.935286   25689 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
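This generated manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod that advertises the 192.168.39.254 VIP on port 8443. A sketch of checking it once the node has joined (the mirror-pod name follows the usual <name>-<node> static-pod convention and is an assumption here):

  # sketch: confirm the kube-vip static pod on the new control-plane node
  sudo crictl ps --name kube-vip
  kubectl -n kube-system get pod kube-vip-ha-511021-m03    # name assumed: <pod>-<node>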
	I0708 19:57:11.935352   25689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 19:57:11.946569   25689 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0708 19:57:11.946619   25689 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0708 19:57:11.957244   25689 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0708 19:57:11.957259   25689 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0708 19:57:11.957270   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0708 19:57:11.957304   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 19:57:11.957244   25689 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0708 19:57:11.957360   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0708 19:57:11.957341   25689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0708 19:57:11.957439   25689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0708 19:57:11.963872   25689 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0708 19:57:11.963926   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0708 19:57:11.984081   25689 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0708 19:57:11.984142   25689 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0708 19:57:11.984170   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0708 19:57:11.984233   25689 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0708 19:57:12.039949   25689 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0708 19:57:12.039992   25689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
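The kubeadm, kubectl and kubelet binaries are fetched with a checksum=file:...sha256 query, so each download is verified against the published digest before being copied to the node. A sketch of repeating that check by hand for the kubelet binary (URL and target path copied from this log; run on the new node):

  # sketch: manually verify the transferred kubelet against the published checksum
  curl -fsSLo /tmp/kubelet.sha256 https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
  echo "$(cat /tmp/kubelet.sha256)  /var/lib/minikube/binaries/v1.30.2/kubelet" | sudo sha256sum -c -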
	I0708 19:57:12.892198   25689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0708 19:57:12.904526   25689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0708 19:57:12.922036   25689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 19:57:12.939969   25689 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0708 19:57:12.957406   25689 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0708 19:57:12.961650   25689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 19:57:12.975896   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:57:13.105518   25689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:57:13.124811   25689 host.go:66] Checking if "ha-511021" exists ...
	I0708 19:57:13.125236   25689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:57:13.125289   25689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:57:13.140275   25689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0708 19:57:13.140755   25689 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:57:13.141333   25689 main.go:141] libmachine: Using API Version  1
	I0708 19:57:13.141360   25689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:57:13.141704   25689 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:57:13.141913   25689 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 19:57:13.142087   25689 start.go:316] joinCluster: &{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:57:13.142222   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0708 19:57:13.142236   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 19:57:13.145111   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:57:13.145521   25689 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 19:57:13.145550   25689 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 19:57:13.145666   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 19:57:13.145857   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 19:57:13.146009   25689 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 19:57:13.146126   25689 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 19:57:13.305793   25689 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:57:13.305843   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zpusg1.50b8ceh2h8t3zmox --discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-511021-m03 --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443"
	I0708 19:57:37.116998   25689 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zpusg1.50b8ceh2h8t3zmox --discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-511021-m03 --control-plane --apiserver-advertise-address=192.168.39.70 --apiserver-bind-port=8443": (23.811131572s)
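At this point ha-511021-m03 has joined as the third control-plane member behind control-plane.minikube.internal:8443. A sketch of confirming membership from the test host (node names taken from this log; the new node may still report NotReady for a few seconds, as the wait below shows):

  # sketch: list the control-plane members after the join above
  kubectl --context ha-511021 get nodes -o wide
  # expected: ha-511021, ha-511021-m02 and ha-511021-m03, each with the control-plane role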
	I0708 19:57:37.117034   25689 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0708 19:57:37.730737   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-511021-m03 minikube.k8s.io/updated_at=2024_07_08T19_57_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=ha-511021 minikube.k8s.io/primary=false
	I0708 19:57:37.871695   25689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-511021-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0708 19:57:38.005853   25689 start.go:318] duration metric: took 24.863762753s to joinCluster
	I0708 19:57:38.005940   25689 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 19:57:38.006297   25689 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:57:38.007273   25689 out.go:177] * Verifying Kubernetes components...
	I0708 19:57:38.008505   25689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 19:57:38.302547   25689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 19:57:38.352466   25689 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:57:38.352816   25689 kapi.go:59] client config for ha-511021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key", CAFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfdf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0708 19:57:38.352892   25689 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.33:8443
	I0708 19:57:38.353168   25689 node_ready.go:35] waiting up to 6m0s for node "ha-511021-m03" to be "Ready" ...
	I0708 19:57:38.353256   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:38.353267   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:38.353277   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:38.353284   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:38.359797   25689 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0708 19:57:38.854247   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:38.854271   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:38.854284   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:38.854291   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:38.858471   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:39.353450   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:39.353473   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:39.353485   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:39.353491   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:39.356708   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:39.854112   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:39.854132   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:39.854140   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:39.854145   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:39.857558   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:40.354410   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:40.354436   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:40.354447   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:40.354451   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:40.357844   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:40.358477   25689 node_ready.go:53] node "ha-511021-m03" has status "Ready":"False"
	I0708 19:57:40.854117   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:40.854141   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:40.854152   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:40.854159   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:40.857149   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:41.354299   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:41.354321   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.354332   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.354338   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.363211   25689 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0708 19:57:41.363749   25689 node_ready.go:49] node "ha-511021-m03" has status "Ready":"True"
	I0708 19:57:41.363766   25689 node_ready.go:38] duration metric: took 3.010579366s for node "ha-511021-m03" to be "Ready" ...
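The polling loop above is the node_ready wait; the same readiness condition can be expressed with kubectl wait (a sketch mirroring what is being polled, not taken from the run above):

  # sketch: equivalent one-liner for the readiness wait performed above
  kubectl --context ha-511021 wait --for=condition=Ready node/ha-511021-m03 --timeout=6m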
	I0708 19:57:41.363773   25689 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 19:57:41.363848   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:57:41.363864   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.363873   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.363879   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.371276   25689 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0708 19:57:41.380480   25689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4lzjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.380556   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4lzjf
	I0708 19:57:41.380562   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.380569   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.380575   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.384033   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.384956   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:41.384974   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.384982   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.384993   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.388152   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.389129   25689 pod_ready.go:92] pod "coredns-7db6d8ff4d-4lzjf" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:41.389149   25689 pod_ready.go:81] duration metric: took 8.642992ms for pod "coredns-7db6d8ff4d-4lzjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.389161   25689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w6m9c" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.389236   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w6m9c
	I0708 19:57:41.389248   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.389258   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.389263   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.392877   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.393697   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:41.393715   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.393725   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.393731   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.397534   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.398149   25689 pod_ready.go:92] pod "coredns-7db6d8ff4d-w6m9c" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:41.398170   25689 pod_ready.go:81] duration metric: took 9.001626ms for pod "coredns-7db6d8ff4d-w6m9c" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.398182   25689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.398269   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021
	I0708 19:57:41.398281   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.398290   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.398304   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.402297   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.403133   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:41.403154   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.403164   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.403168   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.406676   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.407108   25689 pod_ready.go:92] pod "etcd-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:41.407130   25689 pod_ready.go:81] duration metric: took 8.931313ms for pod "etcd-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.407147   25689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.407202   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m02
	I0708 19:57:41.407209   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.407216   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.407220   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.410135   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:41.410991   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:41.411036   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.411058   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.411075   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.414039   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:41.414551   25689 pod_ready.go:92] pod "etcd-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:41.414565   25689 pod_ready.go:81] duration metric: took 7.409427ms for pod "etcd-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.414572   25689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:41.554912   25689 request.go:629] Waited for 140.281013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:41.554994   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:41.555026   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.555041   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.555050   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.558653   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.755277   25689 request.go:629] Waited for 195.961416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:41.755330   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:41.755337   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.755348   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.755358   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.759119   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:41.955133   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:41.955151   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:41.955160   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:41.955165   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:41.959763   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:42.155305   25689 request.go:629] Waited for 194.363685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:42.155378   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:42.155396   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:42.155408   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:42.155418   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:42.158785   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:42.415634   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:42.415658   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:42.415670   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:42.415675   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:42.419326   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:42.554891   25689 request.go:629] Waited for 134.134036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:42.554946   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:42.554958   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:42.554966   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:42.554971   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:42.558375   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:42.915736   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:42.915763   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:42.915787   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:42.915793   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:42.919479   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:42.954776   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:42.954798   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:42.954809   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:42.954814   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:42.958277   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:43.414764   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:43.414786   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:43.414793   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:43.414796   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:43.418460   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:43.419505   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:43.419521   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:43.419528   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:43.419532   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:43.422207   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:43.422813   25689 pod_ready.go:102] pod "etcd-ha-511021-m03" in "kube-system" namespace has status "Ready":"False"
	I0708 19:57:43.915164   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:43.915191   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:43.915203   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:43.915209   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:43.918664   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:43.919728   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:43.919745   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:43.919751   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:43.919755   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:43.922887   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:44.415794   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:44.415818   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:44.415829   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:44.415833   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:44.419405   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:44.420650   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:44.420669   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:44.420676   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:44.420680   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:44.423531   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:44.914939   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:44.914960   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:44.914968   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:44.914973   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:44.918699   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:44.919571   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:44.919585   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:44.919594   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:44.919600   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:44.922941   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:45.414943   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/etcd-ha-511021-m03
	I0708 19:57:45.414973   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.414982   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.414987   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.419810   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:45.420552   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:45.420569   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.420578   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.420583   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.424046   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:45.424598   25689 pod_ready.go:92] pod "etcd-ha-511021-m03" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:45.424614   25689 pod_ready.go:81] duration metric: took 4.01003595s for pod "etcd-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:45.424630   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:45.424697   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021
	I0708 19:57:45.424706   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.424714   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.424716   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.427584   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:45.428318   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:45.428330   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.428336   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.428342   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.431249   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:45.431900   25689 pod_ready.go:92] pod "kube-apiserver-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:45.431920   25689 pod_ready.go:81] duration metric: took 7.282529ms for pod "kube-apiserver-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:45.431930   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:45.555284   25689 request.go:629] Waited for 123.294572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m02
	I0708 19:57:45.555358   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m02
	I0708 19:57:45.555365   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.555380   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.555391   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.558718   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:45.755073   25689 request.go:629] Waited for 195.375359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:45.755152   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:45.755164   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.755175   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.755182   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.759746   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:45.760556   25689 pod_ready.go:92] pod "kube-apiserver-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:45.760575   25689 pod_ready.go:81] duration metric: took 328.639072ms for pod "kube-apiserver-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:45.760584   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:45.955219   25689 request.go:629] Waited for 194.56747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:45.955276   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:45.955281   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:45.955289   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:45.955295   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:45.958428   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:46.154506   25689 request.go:629] Waited for 195.29988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:46.154584   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:46.154593   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:46.154601   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:46.154604   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:46.158314   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:46.355049   25689 request.go:629] Waited for 94.258126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:46.355101   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:46.355106   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:46.355113   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:46.355119   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:46.358409   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:46.554332   25689 request.go:629] Waited for 195.282095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:46.554394   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:46.554402   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:46.554413   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:46.554423   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:46.557464   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:46.760925   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:46.760947   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:46.760957   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:46.760963   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:46.764864   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:46.955142   25689 request.go:629] Waited for 189.344829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:46.955234   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:46.955245   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:46.955256   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:46.955269   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:46.959166   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:47.260826   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:47.260848   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:47.260856   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:47.260862   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:47.265501   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:47.354423   25689 request.go:629] Waited for 88.21471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:47.354482   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:47.354497   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:47.354505   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:47.354513   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:47.357779   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:47.760952   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:47.760972   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:47.760983   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:47.760990   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:47.765651   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:47.766553   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:47.766571   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:47.766581   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:47.766589   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:47.770481   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:47.770998   25689 pod_ready.go:102] pod "kube-apiserver-ha-511021-m03" in "kube-system" namespace has status "Ready":"False"
	I0708 19:57:48.261756   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:48.261775   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:48.261783   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:48.261787   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:48.265718   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:48.266725   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:48.266741   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:48.266748   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:48.266753   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:48.270180   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:48.760971   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:48.760991   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:48.760999   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:48.761003   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:48.764632   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:48.765264   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:48.765282   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:48.765290   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:48.765294   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:48.768362   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:49.261193   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:49.261218   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.261230   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.261238   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.264469   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:49.265198   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:49.265215   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.265225   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.265233   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.268019   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:49.761403   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-511021-m03
	I0708 19:57:49.761423   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.761432   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.761440   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.764714   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:49.765551   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:49.765570   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.765578   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.765583   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.768464   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:49.769030   25689 pod_ready.go:92] pod "kube-apiserver-ha-511021-m03" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:49.769048   25689 pod_ready.go:81] duration metric: took 4.008458309s for pod "kube-apiserver-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:49.769057   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:49.769104   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021
	I0708 19:57:49.769111   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.769117   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.769120   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.772289   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:49.773063   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:49.773079   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.773089   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.773095   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.776244   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:49.776766   25689 pod_ready.go:92] pod "kube-controller-manager-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:49.776782   25689 pod_ready.go:81] duration metric: took 7.71841ms for pod "kube-controller-manager-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:49.776793   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:49.954505   25689 request.go:629] Waited for 177.649705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m02
	I0708 19:57:49.954594   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m02
	I0708 19:57:49.954603   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:49.954611   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:49.954616   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:49.958008   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:50.154438   25689 request.go:629] Waited for 195.274615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:50.154522   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:50.154532   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:50.154539   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:50.154544   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:50.157669   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:50.158300   25689 pod_ready.go:92] pod "kube-controller-manager-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:50.158323   25689 pod_ready.go:81] duration metric: took 381.520984ms for pod "kube-controller-manager-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:50.158337   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:50.355341   25689 request.go:629] Waited for 196.911618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:50.355396   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:50.355401   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:50.355408   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:50.355412   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:50.358900   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:50.554299   25689 request.go:629] Waited for 194.290121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:50.554374   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:50.554383   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:50.554391   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:50.554396   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:50.557576   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:50.754809   25689 request.go:629] Waited for 96.282836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:50.754906   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:50.754918   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:50.754930   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:50.754942   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:50.758615   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:50.954346   25689 request.go:629] Waited for 195.002132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:50.954418   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:50.954426   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:50.954433   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:50.954439   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:50.957665   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:51.159292   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:51.159316   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:51.159324   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:51.159327   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:51.162565   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:51.354777   25689 request.go:629] Waited for 191.386264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:51.354825   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:51.354830   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:51.354839   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:51.354847   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:51.358170   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:51.659184   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:51.659210   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:51.659220   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:51.659228   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:51.662583   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:51.755145   25689 request.go:629] Waited for 91.841217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:51.755215   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:51.755227   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:51.755238   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:51.755245   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:51.758802   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:52.158674   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:52.158693   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:52.158700   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:52.158705   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:52.163055   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:52.164359   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:52.164398   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:52.164411   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:52.164420   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:52.166824   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:52.167411   25689 pod_ready.go:102] pod "kube-controller-manager-ha-511021-m03" in "kube-system" namespace has status "Ready":"False"
	I0708 19:57:52.658703   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-511021-m03
	I0708 19:57:52.658729   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:52.658738   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:52.658742   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:52.663232   25689 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 19:57:52.664122   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:52.664141   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:52.664152   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:52.664159   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:52.666495   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:52.667011   25689 pod_ready.go:92] pod "kube-controller-manager-ha-511021-m03" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:52.667033   25689 pod_ready.go:81] duration metric: took 2.508688698s for pod "kube-controller-manager-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:52.667046   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-976tb" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:52.754283   25689 request.go:629] Waited for 87.167609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-976tb
	I0708 19:57:52.754353   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-976tb
	I0708 19:57:52.754365   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:52.754376   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:52.754384   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:52.757538   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:52.954385   25689 request.go:629] Waited for 196.291943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:52.954518   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:52.954558   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:52.954579   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:52.954598   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:52.958408   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:52.958987   25689 pod_ready.go:92] pod "kube-proxy-976tb" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:52.959003   25689 pod_ready.go:81] duration metric: took 291.95006ms for pod "kube-proxy-976tb" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:52.959013   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-scxw5" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:53.154569   25689 request.go:629] Waited for 195.476879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scxw5
	I0708 19:57:53.154629   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scxw5
	I0708 19:57:53.154634   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:53.154641   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:53.154649   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:53.157795   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:53.354830   25689 request.go:629] Waited for 196.38475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:53.354891   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:53.354899   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:53.354909   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:53.354919   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:53.358447   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:53.359336   25689 pod_ready.go:92] pod "kube-proxy-scxw5" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:53.359361   25689 pod_ready.go:81] duration metric: took 400.338804ms for pod "kube-proxy-scxw5" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:53.359373   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tmkjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:53.554796   25689 request.go:629] Waited for 195.355417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmkjf
	I0708 19:57:53.554866   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmkjf
	I0708 19:57:53.554875   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:53.554885   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:53.554892   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:53.557837   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:53.755051   25689 request.go:629] Waited for 196.38985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:53.755121   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:53.755128   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:53.755142   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:53.755152   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:53.758094   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:53.758768   25689 pod_ready.go:92] pod "kube-proxy-tmkjf" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:53.758788   25689 pod_ready.go:81] duration metric: took 399.40706ms for pod "kube-proxy-tmkjf" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:53.758799   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:53.954953   25689 request.go:629] Waited for 196.08838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021
	I0708 19:57:53.955016   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021
	I0708 19:57:53.955022   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:53.955029   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:53.955034   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:53.958507   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:54.154992   25689 request.go:629] Waited for 195.336631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:54.155064   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021
	I0708 19:57:54.155070   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:54.155077   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:54.155084   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:54.158017   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:54.158667   25689 pod_ready.go:92] pod "kube-scheduler-ha-511021" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:54.158690   25689 pod_ready.go:81] duration metric: took 399.882729ms for pod "kube-scheduler-ha-511021" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:54.158701   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:54.354770   25689 request.go:629] Waited for 196.005297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021-m02
	I0708 19:57:54.354848   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021-m02
	I0708 19:57:54.354856   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:54.354864   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:54.354867   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:54.358191   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:54.555142   25689 request.go:629] Waited for 196.370626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:54.555223   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m02
	I0708 19:57:54.555231   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:54.555242   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:54.555248   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:54.558273   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:54.559024   25689 pod_ready.go:92] pod "kube-scheduler-ha-511021-m02" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:54.559045   25689 pod_ready.go:81] duration metric: took 400.33803ms for pod "kube-scheduler-ha-511021-m02" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:54.559055   25689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:54.755323   25689 request.go:629] Waited for 196.208181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021-m03
	I0708 19:57:54.755393   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-511021-m03
	I0708 19:57:54.755399   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:54.755406   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:54.755412   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:54.758976   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:54.954303   25689 request.go:629] Waited for 194.776882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:54.954363   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes/ha-511021-m03
	I0708 19:57:54.954368   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:54.954375   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:54.954381   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:54.957356   25689 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 19:57:54.957974   25689 pod_ready.go:92] pod "kube-scheduler-ha-511021-m03" in "kube-system" namespace has status "Ready":"True"
	I0708 19:57:54.957994   25689 pod_ready.go:81] duration metric: took 398.931537ms for pod "kube-scheduler-ha-511021-m03" in "kube-system" namespace to be "Ready" ...
	I0708 19:57:54.958005   25689 pod_ready.go:38] duration metric: took 13.594221303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 19:57:54.958022   25689 api_server.go:52] waiting for apiserver process to appear ...
	I0708 19:57:54.958071   25689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 19:57:54.975511   25689 api_server.go:72] duration metric: took 16.969531319s to wait for apiserver process to appear ...
	I0708 19:57:54.975538   25689 api_server.go:88] waiting for apiserver healthz status ...
	I0708 19:57:54.975558   25689 api_server.go:253] Checking apiserver healthz at https://192.168.39.33:8443/healthz ...
	I0708 19:57:54.979920   25689 api_server.go:279] https://192.168.39.33:8443/healthz returned 200:
	ok
	I0708 19:57:54.979975   25689 round_trippers.go:463] GET https://192.168.39.33:8443/version
	I0708 19:57:54.979981   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:54.979988   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:54.979992   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:54.981252   25689 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 19:57:54.981319   25689 api_server.go:141] control plane version: v1.30.2
	I0708 19:57:54.981337   25689 api_server.go:131] duration metric: took 5.791915ms to wait for apiserver health ...
	I0708 19:57:54.981345   25689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 19:57:55.154749   25689 request.go:629] Waited for 173.325339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:57:55.154819   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:57:55.154826   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:55.154834   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:55.154839   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:55.161344   25689 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0708 19:57:55.169278   25689 system_pods.go:59] 24 kube-system pods found
	I0708 19:57:55.169307   25689 system_pods.go:61] "coredns-7db6d8ff4d-4lzjf" [4bcfc11d-8368-4c95-bf64-5b3d09c4b455] Running
	I0708 19:57:55.169312   25689 system_pods.go:61] "coredns-7db6d8ff4d-w6m9c" [8f45dd66-3096-4878-8b2b-96dcf12bbef2] Running
	I0708 19:57:55.169317   25689 system_pods.go:61] "etcd-ha-511021" [52134689-3a05-4bfa-ae28-2696f8bf0ccb] Running
	I0708 19:57:55.169321   25689 system_pods.go:61] "etcd-ha-511021-m02" [acc2d6d9-6796-453d-a5bb-492c28c5eb94] Running
	I0708 19:57:55.169324   25689 system_pods.go:61] "etcd-ha-511021-m03" [abc1be6f-b619-440b-b6b0-12a99f7f78f1] Running
	I0708 19:57:55.169327   25689 system_pods.go:61] "kindnet-4f49v" [1f0b50ca-73cb-4ffb-9676-09e3a28d7636] Running
	I0708 19:57:55.169330   25689 system_pods.go:61] "kindnet-gn8kn" [68f966e1-e40c-4e6e-8fa4-d3167090fa7c] Running
	I0708 19:57:55.169333   25689 system_pods.go:61] "kindnet-kfpzq" [8400c214-1e12-4869-9d9f-c8d872e29156] Running
	I0708 19:57:55.169336   25689 system_pods.go:61] "kube-apiserver-ha-511021" [e5f0c179-18b9-40ce-9c9c-bfe810f6a422] Running
	I0708 19:57:55.169339   25689 system_pods.go:61] "kube-apiserver-ha-511021-m02" [33e08ded-e75f-4f56-8d52-5447d025d348] Running
	I0708 19:57:55.169342   25689 system_pods.go:61] "kube-apiserver-ha-511021-m03" [ec75847c-55d5-4c98-9fd0-1ee345ff8f77] Running
	I0708 19:57:55.169345   25689 system_pods.go:61] "kube-controller-manager-ha-511021" [136879af-0997-416e-956a-632e940e1da6] Running
	I0708 19:57:55.169348   25689 system_pods.go:61] "kube-controller-manager-ha-511021-m02" [a5d3e392-c4f1-4784-b234-e57a5e9689a9] Running
	I0708 19:57:55.169352   25689 system_pods.go:61] "kube-controller-manager-ha-511021-m03" [9447741b-bf2a-47b5-a3a5-131b27ff0401] Running
	I0708 19:57:55.169354   25689 system_pods.go:61] "kube-proxy-976tb" [97fd998d-9281-40b0-bd6d-cebf8d4bfa02] Running
	I0708 19:57:55.169357   25689 system_pods.go:61] "kube-proxy-scxw5" [6a01e530-81f0-495a-a9a3-576ef3b0de36] Running
	I0708 19:57:55.169360   25689 system_pods.go:61] "kube-proxy-tmkjf" [fb7c00aa-f846-430e-92a2-04cd2fc8a62b] Running
	I0708 19:57:55.169363   25689 system_pods.go:61] "kube-scheduler-ha-511021" [978f9f3f-1bfe-4d9c-9dcf-5a410f101c87] Running
	I0708 19:57:55.169367   25689 system_pods.go:61] "kube-scheduler-ha-511021-m02" [3a4313c1-625d-4ba1-873f-da3ae493f1b5] Running
	I0708 19:57:55.169370   25689 system_pods.go:61] "kube-scheduler-ha-511021-m03" [32ac0620-f107-4073-9a1d-54bae7ce0823] Running
	I0708 19:57:55.169375   25689 system_pods.go:61] "kube-vip-ha-511021" [c2d1c07a-51ae-4264-9fbc-fd7af40ac2d0] Running
	I0708 19:57:55.169378   25689 system_pods.go:61] "kube-vip-ha-511021-m02" [ebc968ae-70c7-45ac-aa9b-ddc9e7142f71] Running
	I0708 19:57:55.169382   25689 system_pods.go:61] "kube-vip-ha-511021-m03" [3d6940a2-b7ef-4b14-a83a-32d61b4f98f4] Running
	I0708 19:57:55.169387   25689 system_pods.go:61] "storage-provisioner" [7d02def4-3af1-4268-a8fa-072c6fd71c83] Running
	I0708 19:57:55.169393   25689 system_pods.go:74] duration metric: took 188.039111ms to wait for pod list to return data ...
	I0708 19:57:55.169402   25689 default_sa.go:34] waiting for default service account to be created ...
	I0708 19:57:55.354813   25689 request.go:629] Waited for 185.34987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/default/serviceaccounts
	I0708 19:57:55.354866   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/default/serviceaccounts
	I0708 19:57:55.354872   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:55.354879   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:55.354884   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:55.358648   25689 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 19:57:55.358782   25689 default_sa.go:45] found service account: "default"
	I0708 19:57:55.358799   25689 default_sa.go:55] duration metric: took 189.390221ms for default service account to be created ...
	I0708 19:57:55.358809   25689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 19:57:55.555161   25689 request.go:629] Waited for 196.272852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:57:55.555249   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0708 19:57:55.555260   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:55.555268   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:55.555272   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:55.563279   25689 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0708 19:57:55.569869   25689 system_pods.go:86] 24 kube-system pods found
	I0708 19:57:55.569898   25689 system_pods.go:89] "coredns-7db6d8ff4d-4lzjf" [4bcfc11d-8368-4c95-bf64-5b3d09c4b455] Running
	I0708 19:57:55.569903   25689 system_pods.go:89] "coredns-7db6d8ff4d-w6m9c" [8f45dd66-3096-4878-8b2b-96dcf12bbef2] Running
	I0708 19:57:55.569908   25689 system_pods.go:89] "etcd-ha-511021" [52134689-3a05-4bfa-ae28-2696f8bf0ccb] Running
	I0708 19:57:55.569913   25689 system_pods.go:89] "etcd-ha-511021-m02" [acc2d6d9-6796-453d-a5bb-492c28c5eb94] Running
	I0708 19:57:55.569917   25689 system_pods.go:89] "etcd-ha-511021-m03" [abc1be6f-b619-440b-b6b0-12a99f7f78f1] Running
	I0708 19:57:55.569921   25689 system_pods.go:89] "kindnet-4f49v" [1f0b50ca-73cb-4ffb-9676-09e3a28d7636] Running
	I0708 19:57:55.569925   25689 system_pods.go:89] "kindnet-gn8kn" [68f966e1-e40c-4e6e-8fa4-d3167090fa7c] Running
	I0708 19:57:55.569933   25689 system_pods.go:89] "kindnet-kfpzq" [8400c214-1e12-4869-9d9f-c8d872e29156] Running
	I0708 19:57:55.569937   25689 system_pods.go:89] "kube-apiserver-ha-511021" [e5f0c179-18b9-40ce-9c9c-bfe810f6a422] Running
	I0708 19:57:55.569940   25689 system_pods.go:89] "kube-apiserver-ha-511021-m02" [33e08ded-e75f-4f56-8d52-5447d025d348] Running
	I0708 19:57:55.569945   25689 system_pods.go:89] "kube-apiserver-ha-511021-m03" [ec75847c-55d5-4c98-9fd0-1ee345ff8f77] Running
	I0708 19:57:55.569952   25689 system_pods.go:89] "kube-controller-manager-ha-511021" [136879af-0997-416e-956a-632e940e1da6] Running
	I0708 19:57:55.569956   25689 system_pods.go:89] "kube-controller-manager-ha-511021-m02" [a5d3e392-c4f1-4784-b234-e57a5e9689a9] Running
	I0708 19:57:55.569962   25689 system_pods.go:89] "kube-controller-manager-ha-511021-m03" [9447741b-bf2a-47b5-a3a5-131b27ff0401] Running
	I0708 19:57:55.569966   25689 system_pods.go:89] "kube-proxy-976tb" [97fd998d-9281-40b0-bd6d-cebf8d4bfa02] Running
	I0708 19:57:55.569970   25689 system_pods.go:89] "kube-proxy-scxw5" [6a01e530-81f0-495a-a9a3-576ef3b0de36] Running
	I0708 19:57:55.569974   25689 system_pods.go:89] "kube-proxy-tmkjf" [fb7c00aa-f846-430e-92a2-04cd2fc8a62b] Running
	I0708 19:57:55.569978   25689 system_pods.go:89] "kube-scheduler-ha-511021" [978f9f3f-1bfe-4d9c-9dcf-5a410f101c87] Running
	I0708 19:57:55.569982   25689 system_pods.go:89] "kube-scheduler-ha-511021-m02" [3a4313c1-625d-4ba1-873f-da3ae493f1b5] Running
	I0708 19:57:55.569987   25689 system_pods.go:89] "kube-scheduler-ha-511021-m03" [32ac0620-f107-4073-9a1d-54bae7ce0823] Running
	I0708 19:57:55.569991   25689 system_pods.go:89] "kube-vip-ha-511021" [c2d1c07a-51ae-4264-9fbc-fd7af40ac2d0] Running
	I0708 19:57:55.569997   25689 system_pods.go:89] "kube-vip-ha-511021-m02" [ebc968ae-70c7-45ac-aa9b-ddc9e7142f71] Running
	I0708 19:57:55.570001   25689 system_pods.go:89] "kube-vip-ha-511021-m03" [3d6940a2-b7ef-4b14-a83a-32d61b4f98f4] Running
	I0708 19:57:55.570005   25689 system_pods.go:89] "storage-provisioner" [7d02def4-3af1-4268-a8fa-072c6fd71c83] Running
	I0708 19:57:55.570011   25689 system_pods.go:126] duration metric: took 211.19314ms to wait for k8s-apps to be running ...
	I0708 19:57:55.570020   25689 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 19:57:55.570079   25689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 19:57:55.586998   25689 system_svc.go:56] duration metric: took 16.970716ms WaitForService to wait for kubelet
	I0708 19:57:55.587021   25689 kubeadm.go:576] duration metric: took 17.581048799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 19:57:55.587041   25689 node_conditions.go:102] verifying NodePressure condition ...
	I0708 19:57:55.754351   25689 request.go:629] Waited for 167.245947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes
	I0708 19:57:55.754438   25689 round_trippers.go:463] GET https://192.168.39.33:8443/api/v1/nodes
	I0708 19:57:55.754450   25689 round_trippers.go:469] Request Headers:
	I0708 19:57:55.754461   25689 round_trippers.go:473]     Accept: application/json, */*
	I0708 19:57:55.754470   25689 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0708 19:57:55.759888   25689 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0708 19:57:55.761044   25689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 19:57:55.761063   25689 node_conditions.go:123] node cpu capacity is 2
	I0708 19:57:55.761077   25689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 19:57:55.761081   25689 node_conditions.go:123] node cpu capacity is 2
	I0708 19:57:55.761084   25689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 19:57:55.761087   25689 node_conditions.go:123] node cpu capacity is 2
	I0708 19:57:55.761091   25689 node_conditions.go:105] duration metric: took 174.046017ms to run NodePressure ...
	I0708 19:57:55.761104   25689 start.go:240] waiting for startup goroutines ...
	I0708 19:57:55.761130   25689 start.go:254] writing updated cluster config ...
	I0708 19:57:55.761437   25689 ssh_runner.go:195] Run: rm -f paused
	I0708 19:57:55.814873   25689 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 19:57:55.816928   25689 out.go:177] * Done! kubectl is now configured to use "ha-511021" cluster and "default" namespace by default
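The trace above is minikube's readiness poll (pod_ready.go in the log): for each control-plane pod it alternates GET .../pods/<name> and GET .../nodes/<node>, re-checking roughly every 500ms until the pod reports Ready or the per-pod 6m0s budget runs out. As a rough illustration only, not minikube's actual pod_ready.go, the following client-go sketch performs the same kind of poll; the kubeconfig path, the hard-coded pod name, and the isPodReady helper are assumptions made for the example.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: kubeconfig at the default ~/.kube/config location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Pod name taken from the log above; the 6m deadline and ~500ms interval
        // mirror the "waiting up to 6m0s" lines and the observed poll spacing.
        const ns, name = "kube-system", "kube-apiserver-ha-511021-m03"
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Printf("pod %q is Ready\n", name)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Printf("timed out waiting for pod %q to be Ready\n", name)
    }

The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in the trace come from client-go's client-side rate limiter delaying requests once the pod/node GET pairs exceed the client's configured QPS; as the message itself states, this is throttling on the client, not an API-server-side (priority and fairness) delay.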
	
	
	==> CRI-O <==
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.573415051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b65749c5-d8db-4986-b0c8-3d61e84bd418 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.574706830Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58b344d8-30a0-429d-885f-85b4228e067f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.575228893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720468935575180165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58b344d8-30a0-429d-885f-85b4228e067f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.575839778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11327a15-2f37-42f2-a844-cde67f53013c name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.575908119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11327a15-2f37-42f2-a844-cde67f53013c name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.576133428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720468678300500015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535991010866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535980957678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0efdf4f079d33157f227c1d53e6e122777f79d2ad8a8d3b8435680085b1d3a68,PodSandboxId:eaef8d52b039d91daa97e3d7bf2cf97fc0d8ed804cb932c4b85a80bef9d9fc93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1720468534377552262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef250a5d2c6701c36dbb63dc1494bd02a11629e58b9b6ad5ab4a0585f444dbe9,PodSandboxId:f429df990fee63fd9c3c13b64f2baa48c08f6ef862689251b9ec13aaa2eddea3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17204685
32996636063,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720468532672412988,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8ad312a5acddb79be337823087ee2b87d36262359d11cd3661e4a31d3026ec,PodSandboxId:fc46a08650b0c113dca0fc2c08b563545e66b03a33e24cba90956eefb7a018d4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720468514032913032,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becedfb7466881b4e5bb5eeaa93d5ece,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720468512223596473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720468512188740790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1c59e04eb8e9c5a9503853a55dd8185bbd443c359ce6d37d9f0c062505e67,PodSandboxId:15cc9c5cd6042f512709da858a518c73462ed5c54944466ad74f4ad42cb59e35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720468512204616479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4326cf8a34b61a7baf29d68ba8e1b5c1c5f72972d74e1a73df5303f1cef7586,PodSandboxId:38bebe295e2bf82cd7b16e9b5f818475dd29df00260db1612a9b45d7b67f0879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720468512135109452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11327a15-2f37-42f2-a844-cde67f53013c name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.618907204Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d091013e-b9c1-4b7b-8d0e-d39a40d85b65 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.619043852Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d091013e-b9c1-4b7b-8d0e-d39a40d85b65 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.620289987Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e622257-932b-481d-a7b0-1daaa8aa4e87 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.620947420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720468935620915668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e622257-932b-481d-a7b0-1daaa8aa4e87 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.621753261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=663502ab-d01b-481e-b6c2-dd6883f9dc4a name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.622003724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=663502ab-d01b-481e-b6c2-dd6883f9dc4a name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.622392532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720468678300500015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535991010866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535980957678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0efdf4f079d33157f227c1d53e6e122777f79d2ad8a8d3b8435680085b1d3a68,PodSandboxId:eaef8d52b039d91daa97e3d7bf2cf97fc0d8ed804cb932c4b85a80bef9d9fc93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1720468534377552262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef250a5d2c6701c36dbb63dc1494bd02a11629e58b9b6ad5ab4a0585f444dbe9,PodSandboxId:f429df990fee63fd9c3c13b64f2baa48c08f6ef862689251b9ec13aaa2eddea3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17204685
32996636063,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720468532672412988,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8ad312a5acddb79be337823087ee2b87d36262359d11cd3661e4a31d3026ec,PodSandboxId:fc46a08650b0c113dca0fc2c08b563545e66b03a33e24cba90956eefb7a018d4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720468514032913032,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becedfb7466881b4e5bb5eeaa93d5ece,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720468512223596473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720468512188740790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1c59e04eb8e9c5a9503853a55dd8185bbd443c359ce6d37d9f0c062505e67,PodSandboxId:15cc9c5cd6042f512709da858a518c73462ed5c54944466ad74f4ad42cb59e35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720468512204616479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4326cf8a34b61a7baf29d68ba8e1b5c1c5f72972d74e1a73df5303f1cef7586,PodSandboxId:38bebe295e2bf82cd7b16e9b5f818475dd29df00260db1612a9b45d7b67f0879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720468512135109452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=663502ab-d01b-481e-b6c2-dd6883f9dc4a name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.630634541Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c7c7167f-8307-4fe8-8894-35e90d43938f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.631217701Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-w8l78,Uid:0dc81a07-5014-49b4-9c2f-e1806d1705e3,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468677055389758,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-08T19:57:56.734732222Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-w6m9c,Uid:8f45dd66-3096-4878-8b2b-96dcf12bbef2,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1720468535744709637,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-08T19:55:33.936152430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4lzjf,Uid:4bcfc11d-8368-4c95-bf64-5b3d09c4b455,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468535736991829,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-07-08T19:55:33.927631619Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eaef8d52b039d91daa97e3d7bf2cf97fc0d8ed804cb932c4b85a80bef9d9fc93,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7d02def4-3af1-4268-a8fa-072c6fd71c83,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468534244651760,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-08T19:55:33.935870180Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&PodSandboxMetadata{Name:kube-proxy-tmkjf,Uid:fb7c00aa-f846-430e-92a2-04cd2fc8a62b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468532486306066,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-07-08T19:55:31.568614861Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f429df990fee63fd9c3c13b64f2baa48c08f6ef862689251b9ec13aaa2eddea3,Metadata:&PodSandboxMetadata{Name:kindnet-4f49v,Uid:1f0b50ca-73cb-4ffb-9676-09e3a28d7636,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468532485103050,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-08T19:55:31.559611901Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&PodSandboxMetadata{Name:etcd-ha-511021,Uid:d92a647e1bb34408bc27cdc3497f9940,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1720468511950568899,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.33:2379,kubernetes.io/config.hash: d92a647e1bb34408bc27cdc3497f9940,kubernetes.io/config.seen: 2024-07-08T19:55:11.470707981Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-511021,Uid:8c3ccf7626b62492304c03ada682e9ee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468511949511412,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b
62492304c03ada682e9ee,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8c3ccf7626b62492304c03ada682e9ee,kubernetes.io/config.seen: 2024-07-08T19:55:11.470754700Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fc46a08650b0c113dca0fc2c08b563545e66b03a33e24cba90956eefb7a018d4,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-511021,Uid:becedfb7466881b4e5bb5eeaa93d5ece,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468511948478205,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becedfb7466881b4e5bb5eeaa93d5ece,},Annotations:map[string]string{kubernetes.io/config.hash: becedfb7466881b4e5bb5eeaa93d5ece,kubernetes.io/config.seen: 2024-07-08T19:55:11.470755475Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:15cc9c5cd6042f512709da858a518c73462ed5c54944466ad74f4ad42cb59e35,Metadata:&PodSandboxMetadata{Name:kube-co
ntroller-manager-ha-511021,Uid:a571722211ffd00c8b1df39a68520333,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468511948000627,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a571722211ffd00c8b1df39a68520333,kubernetes.io/config.seen: 2024-07-08T19:55:11.470753563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:38bebe295e2bf82cd7b16e9b5f818475dd29df00260db1612a9b45d7b67f0879,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-511021,Uid:42b9f382d32fb78346f5160840013b51,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1720468511930189876,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.33:8443,kubernetes.io/config.hash: 42b9f382d32fb78346f5160840013b51,kubernetes.io/config.seen: 2024-07-08T19:55:11.470751755Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c7c7167f-8307-4fe8-8894-35e90d43938f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.632323180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3486d839-90a7-4000-a9d3-23bc854c8ba9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.632400243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3486d839-90a7-4000-a9d3-23bc854c8ba9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.632751600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720468678300500015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535991010866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535980957678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0efdf4f079d33157f227c1d53e6e122777f79d2ad8a8d3b8435680085b1d3a68,PodSandboxId:eaef8d52b039d91daa97e3d7bf2cf97fc0d8ed804cb932c4b85a80bef9d9fc93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1720468534377552262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef250a5d2c6701c36dbb63dc1494bd02a11629e58b9b6ad5ab4a0585f444dbe9,PodSandboxId:f429df990fee63fd9c3c13b64f2baa48c08f6ef862689251b9ec13aaa2eddea3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17204685
32996636063,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720468532672412988,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8ad312a5acddb79be337823087ee2b87d36262359d11cd3661e4a31d3026ec,PodSandboxId:fc46a08650b0c113dca0fc2c08b563545e66b03a33e24cba90956eefb7a018d4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720468514032913032,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becedfb7466881b4e5bb5eeaa93d5ece,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720468512223596473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720468512188740790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1c59e04eb8e9c5a9503853a55dd8185bbd443c359ce6d37d9f0c062505e67,PodSandboxId:15cc9c5cd6042f512709da858a518c73462ed5c54944466ad74f4ad42cb59e35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720468512204616479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4326cf8a34b61a7baf29d68ba8e1b5c1c5f72972d74e1a73df5303f1cef7586,PodSandboxId:38bebe295e2bf82cd7b16e9b5f818475dd29df00260db1612a9b45d7b67f0879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720468512135109452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3486d839-90a7-4000-a9d3-23bc854c8ba9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.666062526Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=daed97bf-01e3-48bb-a394-518337580b8a name=/runtime.v1.RuntimeService/Version
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.666167417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=daed97bf-01e3-48bb-a394-518337580b8a name=/runtime.v1.RuntimeService/Version
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.670130844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2fd7ff52-811c-498a-bb46-416dcc92aef1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.673669462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720468935673638628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fd7ff52-811c-498a-bb46-416dcc92aef1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.674410182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=837a8b42-14a7-46d9-a571-788e0d499e2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.674484775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=837a8b42-14a7-46d9-a571-788e0d499e2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:02:15 ha-511021 crio[678]: time="2024-07-08 20:02:15.674708516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720468678300500015,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535991010866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720468535980957678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0efdf4f079d33157f227c1d53e6e122777f79d2ad8a8d3b8435680085b1d3a68,PodSandboxId:eaef8d52b039d91daa97e3d7bf2cf97fc0d8ed804cb932c4b85a80bef9d9fc93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1720468534377552262,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef250a5d2c6701c36dbb63dc1494bd02a11629e58b9b6ad5ab4a0585f444dbe9,PodSandboxId:f429df990fee63fd9c3c13b64f2baa48c08f6ef862689251b9ec13aaa2eddea3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17204685
32996636063,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720468532672412988,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8ad312a5acddb79be337823087ee2b87d36262359d11cd3661e4a31d3026ec,PodSandboxId:fc46a08650b0c113dca0fc2c08b563545e66b03a33e24cba90956eefb7a018d4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720468514032913032,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becedfb7466881b4e5bb5eeaa93d5ece,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720468512223596473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720468512188740790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1c59e04eb8e9c5a9503853a55dd8185bbd443c359ce6d37d9f0c062505e67,PodSandboxId:15cc9c5cd6042f512709da858a518c73462ed5c54944466ad74f4ad42cb59e35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720468512204616479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4326cf8a34b61a7baf29d68ba8e1b5c1c5f72972d74e1a73df5303f1cef7586,PodSandboxId:38bebe295e2bf82cd7b16e9b5f818475dd29df00260db1612a9b45d7b67f0879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720468512135109452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=837a8b42-14a7-46d9-a571-788e0d499e2e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f1ad4f76c216a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   b1cbe60f17e1a       busybox-fc5497c4f-w8l78
	6b083875d2679       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   a361ba0082084       coredns-7db6d8ff4d-w6m9c
	499dc5b41a3d6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   3765b2ad464be       coredns-7db6d8ff4d-4lzjf
	0efdf4f079d33       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   eaef8d52b039d       storage-provisioner
	ef250a5d2c670       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      6 minutes ago       Running             kindnet-cni               0                   f429df990fee6       kindnet-4f49v
	67153dce61aaa       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      6 minutes ago       Running             kube-proxy                0                   8cba18d6a0140       kube-proxy-tmkjf
	dd8ad312a5acd       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   fc46a08650b0c       kube-vip-ha-511021
	08189f5ac12ce       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   2e4a76498c1cf       etcd-ha-511021
	0ed1c59e04eb8       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      7 minutes ago       Running             kube-controller-manager   0                   15cc9c5cd6042       kube-controller-manager-ha-511021
	019d794c36af8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      7 minutes ago       Running             kube-scheduler            0                   bc2b7b56fb60f       kube-scheduler-ha-511021
	e4326cf8a34b6       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      7 minutes ago       Running             kube-apiserver            0                   38bebe295e2bf       kube-apiserver-ha-511021
	
	
	==> coredns [499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7] <==
	[INFO] 10.244.0.4:59111 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000090789s
	[INFO] 10.244.0.4:36217 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001689218s
	[INFO] 10.244.2.2:60648 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000081401s
	[INFO] 10.244.1.2:34341 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003785945s
	[INFO] 10.244.1.2:60350 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225614s
	[INFO] 10.244.1.2:48742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000218522s
	[INFO] 10.244.1.2:60141 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145244s
	[INFO] 10.244.0.4:58500 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001476805s
	[INFO] 10.244.0.4:53415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090934s
	[INFO] 10.244.0.4:60685 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159681s
	[INFO] 10.244.2.2:35117 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216541s
	[INFO] 10.244.2.2:56929 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000209242s
	[INFO] 10.244.2.2:57601 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099474s
	[INFO] 10.244.1.2:51767 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189518s
	[INFO] 10.244.1.2:53177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013929s
	[INFO] 10.244.0.4:44104 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095184s
	[INFO] 10.244.2.2:51012 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106956s
	[INFO] 10.244.2.2:37460 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124276s
	[INFO] 10.244.2.2:46238 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124359s
	[INFO] 10.244.1.2:56514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153739s
	[INFO] 10.244.1.2:45870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000362406s
	[INFO] 10.244.0.4:54901 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101371s
	[INFO] 10.244.0.4:38430 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128119s
	[INFO] 10.244.0.4:59433 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112582s
	[INFO] 10.244.2.2:50495 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000089543s
	
	
	==> coredns [6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa] <==
	[INFO] 10.244.1.2:51626 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156089s
	[INFO] 10.244.1.2:56377 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010828331s
	[INFO] 10.244.1.2:38901 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119209s
	[INFO] 10.244.0.4:40100 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000072232s
	[INFO] 10.244.0.4:51493 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001936632s
	[INFO] 10.244.0.4:45493 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011856s
	[INFO] 10.244.0.4:43450 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049467s
	[INFO] 10.244.0.4:42950 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177837s
	[INFO] 10.244.2.2:44783 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001772539s
	[INFO] 10.244.2.2:60536 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011424s
	[INFO] 10.244.2.2:56160 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090498s
	[INFO] 10.244.2.2:60942 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001479529s
	[INFO] 10.244.2.2:59066 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078767s
	[INFO] 10.244.1.2:33094 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298986s
	[INFO] 10.244.1.2:41194 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092808s
	[INFO] 10.244.0.4:44172 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168392s
	[INFO] 10.244.0.4:47644 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085824s
	[INFO] 10.244.0.4:45776 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131918s
	[INFO] 10.244.2.2:53642 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164258s
	[INFO] 10.244.1.2:32877 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000282103s
	[INFO] 10.244.1.2:59022 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013901s
	[INFO] 10.244.0.4:35939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129873s
	[INFO] 10.244.2.2:48648 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161626s
	[INFO] 10.244.2.2:59172 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147702s
	[INFO] 10.244.2.2:45542 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156821s
	
	
	==> describe nodes <==
	Name:               ha-511021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T19_55_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:55:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:02:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:58:22 +0000   Mon, 08 Jul 2024 19:55:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:58:22 +0000   Mon, 08 Jul 2024 19:55:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:58:22 +0000   Mon, 08 Jul 2024 19:55:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:58:22 +0000   Mon, 08 Jul 2024 19:55:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.33
	  Hostname:    ha-511021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b87893acdd9a476ea34795541f3789df
	  System UUID:                b87893ac-dd9a-476e-a347-95541f3789df
	  Boot ID:                    17494c0f-24c9-4604-bfc5-8f8d6538a4f6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w8l78              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 coredns-7db6d8ff4d-4lzjf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m44s
	  kube-system                 coredns-7db6d8ff4d-w6m9c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m44s
	  kube-system                 etcd-ha-511021                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m58s
	  kube-system                 kindnet-4f49v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m45s
	  kube-system                 kube-apiserver-ha-511021             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 kube-controller-manager-ha-511021    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 kube-proxy-tmkjf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 kube-scheduler-ha-511021             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 kube-vip-ha-511021                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m43s  kube-proxy       
	  Normal  Starting                 6m58s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m58s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m58s  kubelet          Node ha-511021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m58s  kubelet          Node ha-511021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m58s  kubelet          Node ha-511021 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m45s  node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Normal  NodeReady                6m43s  kubelet          Node ha-511021 status is now: NodeReady
	  Normal  RegisteredNode           5m36s  node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Normal  RegisteredNode           4m24s  node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	
	
	Name:               ha-511021-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T19_56_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:56:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 19:58:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 08 Jul 2024 19:58:23 +0000   Mon, 08 Jul 2024 19:59:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 08 Jul 2024 19:58:23 +0000   Mon, 08 Jul 2024 19:59:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 08 Jul 2024 19:58:23 +0000   Mon, 08 Jul 2024 19:59:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 08 Jul 2024 19:58:23 +0000   Mon, 08 Jul 2024 19:59:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    ha-511021-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 09ff24d6fb9848b0b108f4ecb99eedc3
	  System UUID:                09ff24d6-fb98-48b0-b108-f4ecb99eedc3
	  Boot ID:                    44b68e74-b329-4b25-97a6-3396a30d544a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xjfx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 etcd-ha-511021-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m53s
	  kube-system                 kindnet-gn8kn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m55s
	  kube-system                 kube-apiserver-ha-511021-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-controller-manager-ha-511021-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-proxy-976tb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-scheduler-ha-511021-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-vip-ha-511021-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m50s                  kube-proxy       
	  Normal  RegisteredNode           5m55s                  node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m55s (x8 over 5m55s)  kubelet          Node ha-511021-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x8 over 5m55s)  kubelet          Node ha-511021-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x7 over 5m55s)  kubelet          Node ha-511021-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m36s                  node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  NodeNotReady             2m40s                  node-controller  Node ha-511021-m02 status is now: NodeNotReady
	
	
	Name:               ha-511021-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T19_57_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:57:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:02:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:58:04 +0000   Mon, 08 Jul 2024 19:57:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:58:04 +0000   Mon, 08 Jul 2024 19:57:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:58:04 +0000   Mon, 08 Jul 2024 19:57:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:58:04 +0000   Mon, 08 Jul 2024 19:57:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-511021-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a1265a3cabd4e6aae62914cc287dffa
	  System UUID:                8a1265a3-cabd-4e6a-ae62-914cc287dffa
	  Boot ID:                    6affb020-1648-4456-b4d6-301592f6f240
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-x9p75                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 etcd-ha-511021-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m41s
	  kube-system                 kindnet-kfpzq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m43s
	  kube-system                 kube-apiserver-ha-511021-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-controller-manager-ha-511021-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-proxy-scxw5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-scheduler-ha-511021-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-vip-ha-511021-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m43s (x8 over 4m43s)  kubelet          Node ha-511021-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s (x8 over 4m43s)  kubelet          Node ha-511021-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s (x7 over 4m43s)  kubelet          Node ha-511021-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m41s                  node-controller  Node ha-511021-m03 event: Registered Node ha-511021-m03 in Controller
	  Normal  RegisteredNode           4m40s                  node-controller  Node ha-511021-m03 event: Registered Node ha-511021-m03 in Controller
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-511021-m03 event: Registered Node ha-511021-m03 in Controller
	
	
	Name:               ha-511021-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T19_58_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:58:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:02:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:59:04 +0000   Mon, 08 Jul 2024 19:58:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:59:04 +0000   Mon, 08 Jul 2024 19:58:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:59:04 +0000   Mon, 08 Jul 2024 19:58:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:59:04 +0000   Mon, 08 Jul 2024 19:58:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    ha-511021-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef479bd2efc3487eb39d936b4399c97b
	  System UUID:                ef479bd2-efc3-487e-b39d-936b4399c97b
	  Boot ID:                    9e902555-dfb9-4fff-947a-24e55fd76688
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bbbp6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m42s
	  kube-system                 kube-proxy-7mb58    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m36s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m43s (x2 over 3m43s)  kubelet          Node ha-511021-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s (x2 over 3m43s)  kubelet          Node ha-511021-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s (x2 over 3m43s)  kubelet          Node ha-511021-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m41s                  node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal  RegisteredNode           3m40s                  node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal  NodeReady                3m34s                  kubelet          Node ha-511021-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul 8 19:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050477] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040158] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.560798] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.360481] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.523061] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul 8 19:55] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.119364] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.209787] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.142097] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.285009] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.308511] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.058301] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.483782] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.535916] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.022132] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.103961] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.289495] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.234845] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e] <==
	{"level":"warn","ts":"2024-07-08T20:02:15.933078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:15.969052Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:15.980939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:15.986087Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:15.999234Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.006491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.01812Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.024247Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.028627Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.033617Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.03638Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.041639Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.043627Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.050355Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.056605Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.060402Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.067716Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.073282Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.078775Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.082612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.085435Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.090387Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.095998Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.102016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:02:16.13392Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:02:16 up 7 min,  0 users,  load average: 0.06, 0.17, 0.10
	Linux ha-511021 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ef250a5d2c6701c36dbb63dc1494bd02a11629e58b9b6ad5ab4a0585f444dbe9] <==
	I0708 20:01:44.371574       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:01:54.380416       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:01:54.380611       1 main.go:227] handling current node
	I0708 20:01:54.380650       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:01:54.380670       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:01:54.380870       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0708 20:01:54.380914       1 main.go:250] Node ha-511021-m03 has CIDR [10.244.2.0/24] 
	I0708 20:01:54.380994       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:01:54.381013       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:02:04.392890       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:02:04.392994       1 main.go:227] handling current node
	I0708 20:02:04.393019       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:02:04.393037       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:02:04.393160       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0708 20:02:04.393180       1 main.go:250] Node ha-511021-m03 has CIDR [10.244.2.0/24] 
	I0708 20:02:04.393235       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:02:04.393252       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:02:14.410396       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:02:14.410521       1 main.go:227] handling current node
	I0708 20:02:14.410559       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:02:14.410591       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:02:14.410736       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0708 20:02:14.410775       1 main.go:250] Node ha-511021-m03 has CIDR [10.244.2.0/24] 
	I0708 20:02:14.410951       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:02:14.410998       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e4326cf8a34b61a7baf29d68ba8e1b5c1c5f72972d74e1a73df5303f1cef7586] <==
	W0708 19:55:17.747429       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.33]
	I0708 19:55:17.748563       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 19:55:17.753732       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 19:55:17.928724       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 19:55:18.874618       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 19:55:18.900685       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0708 19:55:18.919491       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 19:55:31.486461       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0708 19:55:32.033454       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0708 19:57:59.835641       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45684: use of closed network connection
	E0708 19:58:00.036890       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45702: use of closed network connection
	E0708 19:58:00.227515       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45718: use of closed network connection
	E0708 19:58:00.442844       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45738: use of closed network connection
	E0708 19:58:00.628129       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45752: use of closed network connection
	E0708 19:58:00.809482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45762: use of closed network connection
	E0708 19:58:01.001441       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45780: use of closed network connection
	E0708 19:58:01.193852       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45796: use of closed network connection
	E0708 19:58:01.376713       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45806: use of closed network connection
	E0708 19:58:01.666045       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45838: use of closed network connection
	E0708 19:58:01.847636       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45852: use of closed network connection
	E0708 19:58:02.039611       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45856: use of closed network connection
	E0708 19:58:02.221519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45884: use of closed network connection
	E0708 19:58:02.420192       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45900: use of closed network connection
	E0708 19:58:02.595747       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45906: use of closed network connection
	W0708 19:59:17.760184       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.33 192.168.39.70]
	
	
	==> kube-controller-manager [0ed1c59e04eb8e9c5a9503853a55dd8185bbd443c359ce6d37d9f0c062505e67] <==
	I0708 19:57:33.883717       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-511021-m03" podCIDRs=["10.244.2.0/24"]
	I0708 19:57:36.544686       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-511021-m03"
	I0708 19:57:56.741579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.759013ms"
	I0708 19:57:56.767073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.366792ms"
	I0708 19:57:56.849489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.344716ms"
	I0708 19:57:57.065957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="216.268114ms"
	I0708 19:57:57.153949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.932308ms"
	I0708 19:57:57.249947       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.929148ms"
	E0708 19:57:57.250162       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0708 19:57:57.311962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.620171ms"
	I0708 19:57:57.312087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.723µs"
	I0708 19:57:58.652325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.251102ms"
	I0708 19:57:58.652594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.27µs"
	I0708 19:57:59.105148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.860931ms"
	I0708 19:57:59.105270       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.396µs"
	I0708 19:57:59.363448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.784415ms"
	I0708 19:57:59.363654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.163µs"
	E0708 19:58:33.825591       1 certificate_controller.go:146] Sync csr-v8ghx failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-v8ghx": the object has been modified; please apply your changes to the latest version and try again
	I0708 19:58:33.945736       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-511021-m04\" does not exist"
	I0708 19:58:34.128845       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-511021-m04" podCIDRs=["10.244.3.0/24"]
	I0708 19:58:36.607656       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-511021-m04"
	I0708 19:58:42.501604       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-511021-m04"
	I0708 19:59:36.631244       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-511021-m04"
	I0708 19:59:36.776634       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.391327ms"
	I0708 19:59:36.776876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.037µs"
	
	
	==> kube-proxy [67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19] <==
	I0708 19:55:32.852876       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:55:32.874081       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.33"]
	I0708 19:55:32.914145       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:55:32.914257       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:55:32.914291       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:55:32.917559       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:55:32.917764       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:55:32.918008       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:55:32.920064       1 config.go:192] "Starting service config controller"
	I0708 19:55:32.920133       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:55:32.920176       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:55:32.920192       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:55:32.920779       1 config.go:319] "Starting node config controller"
	I0708 19:55:32.920927       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:55:33.020536       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 19:55:33.020597       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:55:33.021000       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9] <==
	E0708 19:55:17.153223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 19:55:17.257366       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 19:55:17.257414       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 19:55:17.314276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 19:55:17.314328       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0708 19:55:19.466683       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0708 19:57:33.939776       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kfpzq\": pod kindnet-kfpzq is already assigned to node \"ha-511021-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-kfpzq" node="ha-511021-m03"
	E0708 19:57:33.940071       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 8400c214-1e12-4869-9d9f-c8d872e29156(kube-system/kindnet-kfpzq) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kfpzq"
	E0708 19:57:33.940108       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kfpzq\": pod kindnet-kfpzq is already assigned to node \"ha-511021-m03\"" pod="kube-system/kindnet-kfpzq"
	I0708 19:57:33.940158       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kfpzq" node="ha-511021-m03"
	E0708 19:57:33.956776       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-scxw5\": pod kube-proxy-scxw5 is already assigned to node \"ha-511021-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-scxw5" node="ha-511021-m03"
	E0708 19:57:33.956917       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6a01e530-81f0-495a-a9a3-576ef3b0de36(kube-system/kube-proxy-scxw5) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-scxw5"
	E0708 19:57:33.956939       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-scxw5\": pod kube-proxy-scxw5 is already assigned to node \"ha-511021-m03\"" pod="kube-system/kube-proxy-scxw5"
	I0708 19:57:33.957127       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-scxw5" node="ha-511021-m03"
	I0708 19:57:56.702453       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="993a3e9e-2fe3-41de-9bc1-b98386749da9" pod="default/busybox-fc5497c4f-x9p75" assumedNode="ha-511021-m03" currentNode="ha-511021-m02"
	E0708 19:57:56.713330       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-x9p75\": pod busybox-fc5497c4f-x9p75 is already assigned to node \"ha-511021-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-x9p75" node="ha-511021-m02"
	E0708 19:57:56.713407       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 993a3e9e-2fe3-41de-9bc1-b98386749da9(default/busybox-fc5497c4f-x9p75) was assumed on ha-511021-m02 but assigned to ha-511021-m03" pod="default/busybox-fc5497c4f-x9p75"
	E0708 19:57:56.713625       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-x9p75\": pod busybox-fc5497c4f-x9p75 is already assigned to node \"ha-511021-m03\"" pod="default/busybox-fc5497c4f-x9p75"
	I0708 19:57:56.713692       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-x9p75" node="ha-511021-m03"
	E0708 19:57:56.750725       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-w8l78\": pod busybox-fc5497c4f-w8l78 is already assigned to node \"ha-511021\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-w8l78" node="ha-511021"
	E0708 19:57:56.750928       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0dc81a07-5014-49b4-9c2f-e1806d1705e3(default/busybox-fc5497c4f-w8l78) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-w8l78"
	E0708 19:57:56.750955       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-w8l78\": pod busybox-fc5497c4f-w8l78 is already assigned to node \"ha-511021\"" pod="default/busybox-fc5497c4f-w8l78"
	I0708 19:57:56.750975       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-w8l78" node="ha-511021"
	E0708 19:58:34.168305       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7mb58\": pod kube-proxy-7mb58 is already assigned to node \"ha-511021-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7mb58" node="ha-511021-m04"
	E0708 19:58:34.168419       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7mb58\": pod kube-proxy-7mb58 is already assigned to node \"ha-511021-m04\"" pod="kube-system/kube-proxy-7mb58"
	
	
	==> kubelet <==
	Jul 08 19:57:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:57:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 19:57:56 ha-511021 kubelet[1369]: I0708 19:57:56.735361    1369 topology_manager.go:215] "Topology Admit Handler" podUID="0dc81a07-5014-49b4-9c2f-e1806d1705e3" podNamespace="default" podName="busybox-fc5497c4f-w8l78"
	Jul 08 19:57:56 ha-511021 kubelet[1369]: I0708 19:57:56.796637    1369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c25b9\" (UniqueName: \"kubernetes.io/projected/0dc81a07-5014-49b4-9c2f-e1806d1705e3-kube-api-access-c25b9\") pod \"busybox-fc5497c4f-w8l78\" (UID: \"0dc81a07-5014-49b4-9c2f-e1806d1705e3\") " pod="default/busybox-fc5497c4f-w8l78"
	Jul 08 19:57:58 ha-511021 kubelet[1369]: I0708 19:57:58.639143    1369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-w8l78" podStartSLOduration=1.754375308 podStartE2EDuration="2.639072715s" podCreationTimestamp="2024-07-08 19:57:56 +0000 UTC" firstStartedPulling="2024-07-08 19:57:57.403273688 +0000 UTC m=+158.729928017" lastFinishedPulling="2024-07-08 19:57:58.287971096 +0000 UTC m=+159.614625424" observedRunningTime="2024-07-08 19:57:58.638464065 +0000 UTC m=+159.965118413" watchObservedRunningTime="2024-07-08 19:57:58.639072715 +0000 UTC m=+159.965727066"
	Jul 08 19:58:18 ha-511021 kubelet[1369]: E0708 19:58:18.947235    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:58:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:58:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:58:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:58:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 19:59:18 ha-511021 kubelet[1369]: E0708 19:59:18.960448    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:59:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:59:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:59:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:59:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 20:00:18 ha-511021 kubelet[1369]: E0708 20:00:18.946966    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:00:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:00:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:00:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:00:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 20:01:18 ha-511021 kubelet[1369]: E0708 20:01:18.948413    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:01:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:01:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:01:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:01:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-511021 -n ha-511021
helpers_test.go:261: (dbg) Run:  kubectl --context ha-511021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.77s)
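The kube-scheduler entries in the post-mortem log above are optimistic-concurrency failures: a binding for a pod that the API server already considers assigned comes back as a 409 Conflict ("Operation cannot be fulfilled on pods/binding ..."), and the scheduler drops the pod rather than re-queue it. The snippet below is only an illustrative client-go sketch of how that class of Conflict error is usually detected and retried; it is not minikube or kube-scheduler code, and the annotatePod helper and annotation key are hypothetical.

// Illustrative client-go sketch (hypothetical helper, not minikube code):
// handling the 409 Conflict class of error seen in the scheduler log above.
package example

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// annotatePod updates a pod and retries when the API server rejects the
// update with "Operation cannot be fulfilled ..." (an optimistic-concurrency
// conflict), re-reading the latest object on each attempt.
func annotatePod(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Annotations == nil {
			pod.Annotations = map[string]string{}
		}
		pod.Annotations["example.com/touched"] = "true"
		_, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
		return err // RetryOnConflict retries only if apierrors.IsConflict(err)
	})
}

// isBenignBindConflict mirrors the scheduler's decision in the log: when the
// pod is already bound, a bind Conflict can be treated as "nothing to do".
func isBenignBindConflict(err error) bool {
	return apierrors.IsConflict(err)
}

RetryOnConflict re-reads the object and retries only when the returned error is a Conflict, which is the same error class the scheduler log reports; the scheduler itself deliberately aborts instead ("Pod has been assigned to node. Abort adding it back to queue."), since the pod already has a node.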

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (402.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-511021 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-511021 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-511021 -v=7 --alsologtostderr: exit status 82 (2m1.827995151s)

                                                
                                                
-- stdout --
	* Stopping node "ha-511021-m04"  ...
	* Stopping node "ha-511021-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:02:17.607839   31357 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:02:17.608093   31357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:02:17.608103   31357 out.go:304] Setting ErrFile to fd 2...
	I0708 20:02:17.608109   31357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:02:17.608314   31357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:02:17.608569   31357 out.go:298] Setting JSON to false
	I0708 20:02:17.608680   31357 mustload.go:65] Loading cluster: ha-511021
	I0708 20:02:17.609069   31357 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:02:17.609172   31357 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 20:02:17.609368   31357 mustload.go:65] Loading cluster: ha-511021
	I0708 20:02:17.609523   31357 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:02:17.609557   31357 stop.go:39] StopHost: ha-511021-m04
	I0708 20:02:17.609948   31357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:17.610007   31357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:17.624850   31357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45353
	I0708 20:02:17.625339   31357 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:17.625970   31357 main.go:141] libmachine: Using API Version  1
	I0708 20:02:17.625995   31357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:17.626359   31357 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:17.629307   31357 out.go:177] * Stopping node "ha-511021-m04"  ...
	I0708 20:02:17.630701   31357 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0708 20:02:17.630736   31357 main.go:141] libmachine: (ha-511021-m04) Calling .DriverName
	I0708 20:02:17.630979   31357 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0708 20:02:17.631006   31357 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHHostname
	I0708 20:02:17.633727   31357 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:02:17.634172   31357 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:58:17 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:02:17.634205   31357 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:02:17.634272   31357 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHPort
	I0708 20:02:17.634461   31357 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHKeyPath
	I0708 20:02:17.634601   31357 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHUsername
	I0708 20:02:17.634743   31357 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m04/id_rsa Username:docker}
	I0708 20:02:17.719044   31357 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0708 20:02:17.773054   31357 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0708 20:02:17.827315   31357 main.go:141] libmachine: Stopping "ha-511021-m04"...
	I0708 20:02:17.827339   31357 main.go:141] libmachine: (ha-511021-m04) Calling .GetState
	I0708 20:02:17.828880   31357 main.go:141] libmachine: (ha-511021-m04) Calling .Stop
	I0708 20:02:17.832194   31357 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 0/120
	I0708 20:02:18.965202   31357 main.go:141] libmachine: (ha-511021-m04) Calling .GetState
	I0708 20:02:18.966619   31357 main.go:141] libmachine: Machine "ha-511021-m04" was stopped.
	I0708 20:02:18.966636   31357 stop.go:75] duration metric: took 1.335938784s to stop
	I0708 20:02:18.966676   31357 stop.go:39] StopHost: ha-511021-m03
	I0708 20:02:18.966974   31357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:02:18.967014   31357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:02:18.981489   31357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39025
	I0708 20:02:18.981892   31357 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:02:18.982367   31357 main.go:141] libmachine: Using API Version  1
	I0708 20:02:18.982388   31357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:02:18.982734   31357 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:02:18.984747   31357 out.go:177] * Stopping node "ha-511021-m03"  ...
	I0708 20:02:18.985932   31357 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0708 20:02:18.985953   31357 main.go:141] libmachine: (ha-511021-m03) Calling .DriverName
	I0708 20:02:18.986152   31357 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0708 20:02:18.986170   31357 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHHostname
	I0708 20:02:18.989102   31357 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:02:18.989489   31357 main.go:141] libmachine: (ha-511021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:80:5b", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:56:59 +0000 UTC Type:0 Mac:52:54:00:a7:80:5b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-511021-m03 Clientid:01:52:54:00:a7:80:5b}
	I0708 20:02:18.989532   31357 main.go:141] libmachine: (ha-511021-m03) DBG | domain ha-511021-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:a7:80:5b in network mk-ha-511021
	I0708 20:02:18.989633   31357 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHPort
	I0708 20:02:18.989784   31357 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHKeyPath
	I0708 20:02:18.989942   31357 main.go:141] libmachine: (ha-511021-m03) Calling .GetSSHUsername
	I0708 20:02:18.990061   31357 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m03/id_rsa Username:docker}
	I0708 20:02:19.079167   31357 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0708 20:02:19.132997   31357 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0708 20:02:19.189288   31357 main.go:141] libmachine: Stopping "ha-511021-m03"...
	I0708 20:02:19.189314   31357 main.go:141] libmachine: (ha-511021-m03) Calling .GetState
	I0708 20:02:19.190749   31357 main.go:141] libmachine: (ha-511021-m03) Calling .Stop
	I0708 20:02:19.194476   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 0/120
	I0708 20:02:20.195775   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 1/120
	I0708 20:02:21.196883   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 2/120
	I0708 20:02:22.198218   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 3/120
	I0708 20:02:23.200131   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 4/120
	I0708 20:02:24.201965   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 5/120
	I0708 20:02:25.203328   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 6/120
	I0708 20:02:26.204784   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 7/120
	I0708 20:02:27.206337   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 8/120
	I0708 20:02:28.208048   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 9/120
	I0708 20:02:29.209928   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 10/120
	I0708 20:02:30.212385   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 11/120
	I0708 20:02:31.213747   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 12/120
	I0708 20:02:32.215614   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 13/120
	I0708 20:02:33.217165   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 14/120
	I0708 20:02:34.219050   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 15/120
	I0708 20:02:35.220536   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 16/120
	I0708 20:02:36.221976   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 17/120
	I0708 20:02:37.223485   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 18/120
	I0708 20:02:38.224938   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 19/120
	I0708 20:02:39.227084   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 20/120
	I0708 20:02:40.228868   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 21/120
	I0708 20:02:41.230451   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 22/120
	I0708 20:02:42.232054   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 23/120
	I0708 20:02:43.233236   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 24/120
	I0708 20:02:44.234947   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 25/120
	I0708 20:02:45.236628   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 26/120
	I0708 20:02:46.238170   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 27/120
	I0708 20:02:47.239795   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 28/120
	I0708 20:02:48.241216   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 29/120
	I0708 20:02:49.242897   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 30/120
	I0708 20:02:50.244595   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 31/120
	I0708 20:02:51.246026   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 32/120
	I0708 20:02:52.247365   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 33/120
	I0708 20:02:53.248578   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 34/120
	I0708 20:02:54.250336   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 35/120
	I0708 20:02:55.251986   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 36/120
	I0708 20:02:56.253469   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 37/120
	I0708 20:02:57.254755   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 38/120
	I0708 20:02:58.256241   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 39/120
	I0708 20:02:59.258028   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 40/120
	I0708 20:03:00.259254   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 41/120
	I0708 20:03:01.261445   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 42/120
	I0708 20:03:02.263056   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 43/120
	I0708 20:03:03.264625   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 44/120
	I0708 20:03:04.266137   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 45/120
	I0708 20:03:05.268231   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 46/120
	I0708 20:03:06.269601   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 47/120
	I0708 20:03:07.271003   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 48/120
	I0708 20:03:08.272464   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 49/120
	I0708 20:03:09.274398   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 50/120
	I0708 20:03:10.275934   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 51/120
	I0708 20:03:11.277234   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 52/120
	I0708 20:03:12.278590   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 53/120
	I0708 20:03:13.279967   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 54/120
	I0708 20:03:14.281773   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 55/120
	I0708 20:03:15.283225   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 56/120
	I0708 20:03:16.284606   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 57/120
	I0708 20:03:17.285938   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 58/120
	I0708 20:03:18.287284   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 59/120
	I0708 20:03:19.289326   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 60/120
	I0708 20:03:20.290734   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 61/120
	I0708 20:03:21.292096   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 62/120
	I0708 20:03:22.293870   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 63/120
	I0708 20:03:23.295126   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 64/120
	I0708 20:03:24.296804   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 65/120
	I0708 20:03:25.298093   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 66/120
	I0708 20:03:26.299467   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 67/120
	I0708 20:03:27.300812   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 68/120
	I0708 20:03:28.302329   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 69/120
	I0708 20:03:29.304157   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 70/120
	I0708 20:03:30.305728   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 71/120
	I0708 20:03:31.307417   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 72/120
	I0708 20:03:32.308824   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 73/120
	I0708 20:03:33.310449   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 74/120
	I0708 20:03:34.312315   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 75/120
	I0708 20:03:35.313874   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 76/120
	I0708 20:03:36.315188   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 77/120
	I0708 20:03:37.316691   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 78/120
	I0708 20:03:38.318260   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 79/120
	I0708 20:03:39.319683   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 80/120
	I0708 20:03:40.321211   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 81/120
	I0708 20:03:41.322495   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 82/120
	I0708 20:03:42.323849   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 83/120
	I0708 20:03:43.325252   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 84/120
	I0708 20:03:44.327185   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 85/120
	I0708 20:03:45.328675   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 86/120
	I0708 20:03:46.330350   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 87/120
	I0708 20:03:47.331784   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 88/120
	I0708 20:03:48.333241   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 89/120
	I0708 20:03:49.335161   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 90/120
	I0708 20:03:50.336614   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 91/120
	I0708 20:03:51.338027   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 92/120
	I0708 20:03:52.339878   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 93/120
	I0708 20:03:53.342052   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 94/120
	I0708 20:03:54.343621   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 95/120
	I0708 20:03:55.344980   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 96/120
	I0708 20:03:56.346266   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 97/120
	I0708 20:03:57.347648   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 98/120
	I0708 20:03:58.348949   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 99/120
	I0708 20:03:59.350629   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 100/120
	I0708 20:04:00.351976   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 101/120
	I0708 20:04:01.353451   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 102/120
	I0708 20:04:02.354750   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 103/120
	I0708 20:04:03.356314   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 104/120
	I0708 20:04:04.357884   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 105/120
	I0708 20:04:05.359161   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 106/120
	I0708 20:04:06.360609   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 107/120
	I0708 20:04:07.361826   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 108/120
	I0708 20:04:08.363269   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 109/120
	I0708 20:04:09.364503   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 110/120
	I0708 20:04:10.365854   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 111/120
	I0708 20:04:11.367220   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 112/120
	I0708 20:04:12.368783   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 113/120
	I0708 20:04:13.370063   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 114/120
	I0708 20:04:14.372030   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 115/120
	I0708 20:04:15.373482   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 116/120
	I0708 20:04:16.375124   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 117/120
	I0708 20:04:17.376778   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 118/120
	I0708 20:04:18.378172   31357 main.go:141] libmachine: (ha-511021-m03) Waiting for machine to stop 119/120
	I0708 20:04:19.379067   31357 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0708 20:04:19.379147   31357 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0708 20:04:19.381008   31357 out.go:177] 
	W0708 20:04:19.382282   31357 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0708 20:04:19.382298   31357 out.go:239] * 
	* 
	W0708 20:04:19.385473   31357 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:04:19.387099   31357 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-511021 -v=7 --alsologtostderr" : exit status 82
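The stderr above explains the exit status 82: the kvm2 driver asked the m03 VM to stop and then polled its state roughly once per second ("Waiting for machine to stop N/120"); after the 120-attempt budget the machine was still "Running", so minikube gave up and surfaced GUEST_STOP_TIMEOUT. The following is a minimal sketch of that bounded stop-and-poll pattern under the 120 x ~1s budget visible in the log; the VM interface and stopWithTimeout are hypothetical stand-ins, not the actual libmachine/minikube implementation.

// Sketch of a bounded stop-and-poll loop like the one visible in the log
// ("Waiting for machine to stop N/120"). Hypothetical types; not minikube code.
package example

import (
	"errors"
	"fmt"
	"time"
)

// VM is a hypothetical stand-in for a libmachine-style driver handle.
type VM interface {
	Stop() error            // ask the hypervisor to shut the guest down
	State() (string, error) // e.g. "Running" or "Stopped"
}

// stopWithTimeout requests a stop and waits up to attempts*interval for the
// VM to leave the Running state, mirroring the 120 x ~1s budget in the log.
func stopWithTimeout(vm VM, attempts int, interval time.Duration) error {
	if err := vm.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		st, err := vm.State()
		if err != nil {
			return err
		}
		if st != "Running" {
			return nil // the guest is no longer running
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	// Once the budget is exhausted, the caller maps this into an error such
	// as the GUEST_STOP_TIMEOUT / exit status 82 seen in the report above.
	return errors.New(`unable to stop vm, current state "Running"`)
}

A call along the lines of stopWithTimeout(vm, 120, time.Second) reproduces the roughly two-minute window after which the test observed exit status 82.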
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-511021 --wait=true -v=7 --alsologtostderr
E0708 20:04:23.843783   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 20:04:51.530887   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 20:06:29.733214   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-511021 --wait=true -v=7 --alsologtostderr: (4m37.889901827s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-511021
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-511021 -n ha-511021
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-511021 logs -n 25: (1.868109368s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m02:/home/docker/cp-test_ha-511021-m03_ha-511021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m02 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | /home/docker/cp-test_ha-511021-m03_ha-511021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m04:/home/docker/cp-test_ha-511021-m03_ha-511021-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m04 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | /home/docker/cp-test_ha-511021-m03_ha-511021-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-511021 cp testdata/cp-test.txt                                                | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3985602198/001/cp-test_ha-511021-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021:/home/docker/cp-test_ha-511021-m04_ha-511021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021 sudo cat                                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /home/docker/cp-test_ha-511021-m04_ha-511021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m02:/home/docker/cp-test_ha-511021-m04_ha-511021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m02 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /home/docker/cp-test_ha-511021-m04_ha-511021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m03:/home/docker/cp-test_ha-511021-m04_ha-511021-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m03 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /home/docker/cp-test_ha-511021-m04_ha-511021-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-511021 node stop m02 -v=7                                                     | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-511021 node start m02 -v=7                                                    | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-511021 -v=7                                                           | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-511021 -v=7                                                                | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-511021 --wait=true -v=7                                                    | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:04 UTC | 08 Jul 24 20:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-511021                                                                | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:08 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 20:04:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 20:04:19.433891   31820 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:04:19.434119   31820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:04:19.434130   31820 out.go:304] Setting ErrFile to fd 2...
	I0708 20:04:19.434135   31820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:04:19.434313   31820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:04:19.434835   31820 out.go:298] Setting JSON to false
	I0708 20:04:19.435748   31820 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2808,"bootTime":1720466251,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:04:19.435808   31820 start.go:139] virtualization: kvm guest
	I0708 20:04:19.438977   31820 out.go:177] * [ha-511021] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:04:19.440567   31820 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:04:19.440572   31820 notify.go:220] Checking for updates...
	I0708 20:04:19.442375   31820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:04:19.443971   31820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:04:19.445439   31820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:04:19.446678   31820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:04:19.448014   31820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:04:19.449601   31820 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:04:19.449687   31820 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:04:19.450116   31820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:04:19.450166   31820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:04:19.465702   31820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I0708 20:04:19.466129   31820 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:04:19.466637   31820 main.go:141] libmachine: Using API Version  1
	I0708 20:04:19.466661   31820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:04:19.467039   31820 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:04:19.467215   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:04:19.504051   31820 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 20:04:19.505519   31820 start.go:297] selected driver: kvm2
	I0708 20:04:19.505533   31820 start.go:901] validating driver "kvm2" against &{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.205 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:04:19.505732   31820 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:04:19.506179   31820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:04:19.506252   31820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:04:19.521503   31820 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:04:19.522246   31820 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:04:19.522324   31820 cni.go:84] Creating CNI manager for ""
	I0708 20:04:19.522337   31820 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0708 20:04:19.522424   31820 start.go:340] cluster config:
	{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.205 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:04:19.522566   31820 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:04:19.524478   31820 out.go:177] * Starting "ha-511021" primary control-plane node in "ha-511021" cluster
	I0708 20:04:19.525717   31820 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:04:19.525747   31820 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 20:04:19.525757   31820 cache.go:56] Caching tarball of preloaded images
	I0708 20:04:19.525832   31820 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 20:04:19.525844   31820 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 20:04:19.525956   31820 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 20:04:19.526133   31820 start.go:360] acquireMachinesLock for ha-511021: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:04:19.526169   31820 start.go:364] duration metric: took 19.997µs to acquireMachinesLock for "ha-511021"
	I0708 20:04:19.526182   31820 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:04:19.526193   31820 fix.go:54] fixHost starting: 
	I0708 20:04:19.526435   31820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:04:19.526463   31820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:04:19.541137   31820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0708 20:04:19.541532   31820 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:04:19.542033   31820 main.go:141] libmachine: Using API Version  1
	I0708 20:04:19.542052   31820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:04:19.542369   31820 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:04:19.542542   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:04:19.542706   31820 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 20:04:19.544292   31820 fix.go:112] recreateIfNeeded on ha-511021: state=Running err=<nil>
	W0708 20:04:19.544309   31820 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:04:19.546309   31820 out.go:177] * Updating the running kvm2 "ha-511021" VM ...
	I0708 20:04:19.547690   31820 machine.go:94] provisionDockerMachine start ...
	I0708 20:04:19.547710   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:04:19.547959   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:04:19.550245   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.550621   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:19.550646   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.550810   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:04:19.550990   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:19.551159   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:19.551274   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:04:19.551434   31820 main.go:141] libmachine: Using SSH client type: native
	I0708 20:04:19.551647   31820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 20:04:19.551660   31820 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:04:19.666727   31820 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-511021
	
	I0708 20:04:19.666764   31820 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 20:04:19.667062   31820 buildroot.go:166] provisioning hostname "ha-511021"
	I0708 20:04:19.667084   31820 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 20:04:19.667285   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:04:19.669795   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.670211   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:19.670241   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.670404   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:04:19.670596   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:19.670736   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:19.670866   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:04:19.671005   31820 main.go:141] libmachine: Using SSH client type: native
	I0708 20:04:19.671170   31820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 20:04:19.671187   31820 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-511021 && echo "ha-511021" | sudo tee /etc/hostname
	I0708 20:04:19.795732   31820 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-511021
	
	I0708 20:04:19.795767   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:04:19.798619   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.799001   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:19.799144   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.799211   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:04:19.799400   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:19.799607   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:19.799735   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:04:19.799884   31820 main.go:141] libmachine: Using SSH client type: native
	I0708 20:04:19.800048   31820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 20:04:19.800063   31820 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-511021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-511021/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-511021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:04:19.912550   31820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:04:19.912577   31820 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:04:19.912604   31820 buildroot.go:174] setting up certificates
	I0708 20:04:19.912612   31820 provision.go:84] configureAuth start
	I0708 20:04:19.912619   31820 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 20:04:19.912886   31820 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:04:19.915407   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.915763   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:19.915783   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.916004   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:04:19.918348   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.918823   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:19.918846   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.919004   31820 provision.go:143] copyHostCerts
	I0708 20:04:19.919029   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:04:19.919056   31820 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:04:19.919097   31820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:04:19.919164   31820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:04:19.919255   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:04:19.919272   31820 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:04:19.919282   31820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:04:19.919309   31820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:04:19.919360   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:04:19.919376   31820 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:04:19.919382   31820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:04:19.919401   31820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:04:19.919483   31820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.ha-511021 san=[127.0.0.1 192.168.39.33 ha-511021 localhost minikube]
	I0708 20:04:20.075593   31820 provision.go:177] copyRemoteCerts
	I0708 20:04:20.075652   31820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:04:20.075673   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:04:20.078518   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:20.078866   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:20.078894   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:20.079035   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:04:20.079216   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:20.079335   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:04:20.079512   31820 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:04:20.169045   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 20:04:20.169115   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:04:20.196182   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 20:04:20.196265   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0708 20:04:20.222424   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 20:04:20.222483   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:04:20.257172   31820 provision.go:87] duration metric: took 344.546164ms to configureAuth
	I0708 20:04:20.257207   31820 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:04:20.257450   31820 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:04:20.257520   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:04:20.260439   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:20.260857   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:20.260885   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:20.261077   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:04:20.261304   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:20.261484   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:20.261660   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:04:20.261856   31820 main.go:141] libmachine: Using SSH client type: native
	I0708 20:04:20.262034   31820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 20:04:20.262049   31820 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:05:51.087814   31820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:05:51.087843   31820 machine.go:97] duration metric: took 1m31.540138601s to provisionDockerMachine
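	Note on the provisioning step above: the printf %!s(MISSING) is the log wrapper mangling its own format verb, not part of the command; the payload being written is the CRIO_MINIKUBE_OPTIONS line visible inline. Reconstructed as a stand-alone shell command (everything here is taken verbatim from the log, with only the format verb restored):

	        sudo mkdir -p /etc/sysconfig && printf %s "
	        CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	        " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

	The compound command was issued at 20:04:20 and returned at 20:05:51, roughly 91 seconds, which accounts for essentially all of the 1m31s provisionDockerMachine duration reported on the line above.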
	I0708 20:05:51.087860   31820 start.go:293] postStartSetup for "ha-511021" (driver="kvm2")
	I0708 20:05:51.087871   31820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:05:51.087887   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:05:51.088215   31820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:05:51.088249   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:05:51.091430   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.091926   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:05:51.091957   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.092151   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:05:51.092357   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:05:51.092529   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:05:51.092693   31820 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:05:51.179683   31820 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:05:51.184280   31820 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:05:51.184307   31820 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:05:51.184368   31820 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:05:51.184463   31820 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:05:51.184476   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /etc/ssl/certs/131412.pem
	I0708 20:05:51.184588   31820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:05:51.194759   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:05:51.219941   31820 start.go:296] duration metric: took 132.066981ms for postStartSetup
	I0708 20:05:51.219984   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:05:51.220286   31820 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0708 20:05:51.220320   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:05:51.223247   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.223698   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:05:51.223722   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.223940   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:05:51.224125   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:05:51.224249   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:05:51.224346   31820 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	W0708 20:05:51.312352   31820 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0708 20:05:51.312379   31820 fix.go:56] duration metric: took 1m31.786189185s for fixHost
	I0708 20:05:51.312400   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:05:51.315061   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.315423   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:05:51.315461   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.315720   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:05:51.316014   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:05:51.316171   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:05:51.316286   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:05:51.316441   31820 main.go:141] libmachine: Using SSH client type: native
	I0708 20:05:51.316595   31820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 20:05:51.316605   31820 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:05:51.424426   31820 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720469151.369052026
	
	I0708 20:05:51.424447   31820 fix.go:216] guest clock: 1720469151.369052026
	I0708 20:05:51.424457   31820 fix.go:229] Guest: 2024-07-08 20:05:51.369052026 +0000 UTC Remote: 2024-07-08 20:05:51.312387328 +0000 UTC m=+91.916293259 (delta=56.664698ms)
	I0708 20:05:51.424497   31820 fix.go:200] guest clock delta is within tolerance: 56.664698ms
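	The date +%!s(MISSING).%!N(MISSING) above is the same formatting artifact; the command actually run on the guest is date +%s.%N, i.e. epoch seconds with nanosecond precision. The skew check compares that guest value (…369052026) with the host-side timestamp captured when the command returned (…312387328) and accepts the difference if it is small. A rough illustration only, with the ssh invocation assumed for the sketch (the real run goes through libmachine's own SSH client):

	        guest=$(ssh docker@192.168.39.33 'date +%s.%N')           # 1720469151.369052026 in this run
	        host=$(date +%s.%N)
	        echo "guest-host delta: $(echo "$guest - $host" | bc)s"   # ~0.0566s here, within tolerance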
	I0708 20:05:51.424503   31820 start.go:83] releasing machines lock for "ha-511021", held for 1m31.898325471s
	I0708 20:05:51.424528   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:05:51.424775   31820 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:05:51.427789   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.428154   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:05:51.428182   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.428353   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:05:51.428828   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:05:51.428995   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:05:51.429091   31820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:05:51.429128   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:05:51.429195   31820 ssh_runner.go:195] Run: cat /version.json
	I0708 20:05:51.429219   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:05:51.431850   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.432270   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.432307   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:05:51.432325   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.432494   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:05:51.432678   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:05:51.432752   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:05:51.432774   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.432837   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:05:51.432949   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:05:51.433014   31820 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:05:51.433102   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:05:51.433244   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:05:51.433371   31820 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:05:51.512909   31820 ssh_runner.go:195] Run: systemctl --version
	I0708 20:05:51.541129   31820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:05:51.706403   31820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:05:51.718232   31820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:05:51.718290   31820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:05:51.727852   31820 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
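	In the find command above, %!p(MISSING) stands in for find's -printf "%p, ". With that restored, and with the parentheses and trailing semicolon escaped for an interactive shell, the equivalent one-liner is:

	        sudo find /etc/cni/net.d -maxdepth 1 -type f \
	          \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	          -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;

	i.e. any bridge/podman CNI config is renamed away with a .mk_disabled suffix so that only the CNI minikube manages (kindnet, per the multinode detection earlier) stays active; in this run nothing matched, hence "nothing to disable".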
	I0708 20:05:51.727880   31820 start.go:494] detecting cgroup driver to use...
	I0708 20:05:51.727940   31820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:05:51.743918   31820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:05:51.758176   31820 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:05:51.758256   31820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:05:51.772317   31820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:05:51.785878   31820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:05:51.937937   31820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:05:52.099749   31820 docker.go:233] disabling docker service ...
	I0708 20:05:52.099818   31820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:05:52.120179   31820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:05:52.134858   31820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:05:52.282618   31820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:05:52.438209   31820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:05:52.452316   31820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:05:52.472171   31820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:05:52.472242   31820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.483334   31820 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:05:52.483412   31820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.494490   31820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.505472   31820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.516573   31820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:05:52.527809   31820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.538778   31820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.550272   31820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.561862   31820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:05:52.572250   31820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:05:52.582227   31820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:05:52.731897   31820 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:05:59.937325   31820 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.205385672s)
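	The run of sed edits above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place. Assuming the stock drop-in from the ISO already carries the keys the in-place substitutions match on, the file ends up containing, among its other defaults:

	        pause_image = "registry.k8s.io/pause:3.9"
	        cgroup_manager = "cgroupfs"
	        conmon_cgroup = "pod"
	        default_sysctls = [
	          "net.ipv4.ip_unprivileged_port_start=0",
	        ]

	after which crio is restarted (7.2s here) so the new pause image, cgroup driver and sysctl settings take effect.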
	I0708 20:05:59.937352   31820 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:05:59.937396   31820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:05:59.942900   31820 start.go:562] Will wait 60s for crictl version
	I0708 20:05:59.942959   31820 ssh_runner.go:195] Run: which crictl
	I0708 20:05:59.946832   31820 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:05:59.990094   31820 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:05:59.990186   31820 ssh_runner.go:195] Run: crio --version
	I0708 20:06:00.020049   31820 ssh_runner.go:195] Run: crio --version
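	With /etc/crictl.yaml written a few lines earlier (its runtime-endpoint points at unix:///var/run/crio/crio.sock; the %!s(MISSING) there is the usual formatting artifact), the version probes above amount to running, on the node:

	        sudo crictl version    # RuntimeName: cri-o, RuntimeVersion: 1.29.1, RuntimeApiVersion: v1
	        crio --version         # run twice above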
	I0708 20:06:00.053548   31820 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:06:00.054767   31820 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:06:00.057518   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:06:00.057890   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:06:00.057914   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:06:00.058127   31820 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 20:06:00.063257   31820 kubeadm.go:877] updating cluster {Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.205 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:06:00.063438   31820 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:06:00.063511   31820 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:06:00.106872   31820 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:06:00.106894   31820 crio.go:433] Images already preloaded, skipping extraction
	I0708 20:06:00.106940   31820 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:06:00.146952   31820 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:06:00.146978   31820 cache_images.go:84] Images are preloaded, skipping loading
	I0708 20:06:00.146987   31820 kubeadm.go:928] updating node { 192.168.39.33 8443 v1.30.2 crio true true} ...
	I0708 20:06:00.147087   31820 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-511021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:06:00.147149   31820 ssh_runner.go:195] Run: crio config
	I0708 20:06:00.203044   31820 cni.go:84] Creating CNI manager for ""
	I0708 20:06:00.203319   31820 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0708 20:06:00.203332   31820 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:06:00.203363   31820 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.33 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-511021 NodeName:ha-511021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:06:00.203547   31820 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-511021"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
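	(The evictionHard values above render as "0%!"(MISSING) because of the log's printf wrapping; the generated file carries plain "0%" for all three thresholds.) This kubeadm config is written to the node further down as /var/tmp/minikube/kubeadm.yaml.new (2150 bytes), so the rendered values can be inspected directly on the host, for example:

	        grep -A4 evictionHard /var/tmp/minikube/kubeadm.yaml.new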
	
	I0708 20:06:00.203577   31820 kube-vip.go:115] generating kube-vip config ...
	I0708 20:06:00.203628   31820 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0708 20:06:00.216056   31820 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0708 20:06:00.216179   31820 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
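	This manifest is copied further down to /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes), so the kubelet runs kube-vip as a static pod; per the env above it does leader election across the control-plane nodes, attaches the APIServerHAVIP 192.168.39.254 to eth0 on the current leader, and load-balances port 8443 (cp_enable/lb_enable). A purely illustrative check from a node once the pod is up:

	        ip -4 addr show eth0 | grep 192.168.39.254     # VIP present on the current kube-vip leader
	        curl -sk https://192.168.39.254:8443/version   # should reach a v1.30.2 apiserver via the VIP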
	I0708 20:06:00.216244   31820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:06:00.226088   31820 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:06:00.226159   31820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0708 20:06:00.236923   31820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0708 20:06:00.254962   31820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:06:00.273144   31820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0708 20:06:00.291210   31820 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0708 20:06:00.310081   31820 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0708 20:06:00.314454   31820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:06:00.463339   31820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:06:00.478399   31820 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021 for IP: 192.168.39.33
	I0708 20:06:00.478423   31820 certs.go:194] generating shared ca certs ...
	I0708 20:06:00.478443   31820 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:06:00.478591   31820 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:06:00.478640   31820 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:06:00.478648   31820 certs.go:256] generating profile certs ...
	I0708 20:06:00.478728   31820 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key
	I0708 20:06:00.478759   31820 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a35ec44e
	I0708 20:06:00.478775   31820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a35ec44e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.33 192.168.39.216 192.168.39.70 192.168.39.254]
	I0708 20:06:00.571186   31820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a35ec44e ...
	I0708 20:06:00.571218   31820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a35ec44e: {Name:mk238071fcb109f666cf0ada333a915684a72d77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:06:00.571386   31820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a35ec44e ...
	I0708 20:06:00.571396   31820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a35ec44e: {Name:mkf31cd3a0fa10858e99ac8972f3ab7373aa3fc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:06:00.571486   31820 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a35ec44e -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt
	I0708 20:06:00.571618   31820 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a35ec44e -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key
	I0708 20:06:00.571748   31820 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key
	I0708 20:06:00.571767   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 20:06:00.571782   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 20:06:00.571799   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 20:06:00.571812   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 20:06:00.571822   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 20:06:00.571833   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 20:06:00.571844   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 20:06:00.571857   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 20:06:00.571914   31820 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:06:00.571944   31820 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:06:00.571952   31820 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:06:00.571972   31820 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:06:00.571995   31820 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:06:00.572015   31820 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:06:00.572050   31820 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:06:00.572074   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:06:00.572088   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem -> /usr/share/ca-certificates/13141.pem
	I0708 20:06:00.572100   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /usr/share/ca-certificates/131412.pem
	I0708 20:06:00.572665   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:06:00.599701   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:06:00.624931   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:06:00.649461   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:06:00.674557   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0708 20:06:00.699217   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 20:06:00.723772   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:06:00.749601   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:06:00.775906   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:06:00.802649   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:06:00.829779   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:06:00.858623   31820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:06:00.877541   31820 ssh_runner.go:195] Run: openssl version
	I0708 20:06:00.884120   31820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:06:00.895954   31820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:06:00.901215   31820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:06:00.901283   31820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:06:00.907534   31820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:06:00.918094   31820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:06:00.931616   31820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:06:00.937167   31820 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:06:00.937236   31820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:06:00.943312   31820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:06:00.953330   31820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:06:00.964710   31820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:06:00.970009   31820 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:06:00.970089   31820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:06:00.976408   31820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
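A minimal sketch (not part of the captured log) of the hash-then-link convention the commands above follow, assuming the standard OpenSSL trust-store layout: `openssl x509 -hash` prints the subject-name hash, and that hash becomes the `<hash>.0` symlink name (b5213941.0, 51391683.0, 3ec20f2e.0 in this run).

	# Sketch only: how the <hash>.0 symlink names above are derived (c_rehash convention).
	# `openssl x509 -hash -noout` prints the subject-name hash; the trust store expects a
	# symlink named <hash>.0 pointing at the PEM file.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
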
	I0708 20:06:00.987139   31820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:06:00.992480   31820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:06:00.998585   31820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:06:01.005031   31820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:06:01.011228   31820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:06:01.017338   31820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:06:01.023844   31820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
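A minimal sketch of what the `-checkend 86400` probes above assert, assuming standard OpenSSL semantics: exit status 0 means the certificate is still valid 86400 seconds (24 hours) from now, non-zero means it expires sooner.

	# Sketch only: flag any control-plane cert that would expire within the next 24h.
	for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	           /var/lib/minikube/certs/etcd/server.crt \
	           /var/lib/minikube/certs/front-proxy-client.crt; do
	  openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null \
	    || echo "WARN: $crt expires within 24h"
	done
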
	I0708 20:06:01.030120   31820 kubeadm.go:391] StartCluster: {Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.205 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:06:01.030228   31820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:06:01.030294   31820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:06:01.081010   31820 cri.go:89] found id: "07b1e06f2165b9a75c0179c4493d83cdf879cdcfbc5962391d44f2a78f573e14"
	I0708 20:06:01.081032   31820 cri.go:89] found id: "10819bc348798228cb925cfd626dd580cd269711d9fb52b5386026c657c7a2c5"
	I0708 20:06:01.081036   31820 cri.go:89] found id: "08da972caef161c88bc90649163dc4eaaa5cc7a0a9f60dd1e9f124634d88a270"
	I0708 20:06:01.081039   31820 cri.go:89] found id: "693a49012ffbe0f1af1ebb92fcad97b83ab34e0d244582a1e7ad6e2a12e4698a"
	I0708 20:06:01.081043   31820 cri.go:89] found id: "6e2b3c8d333ac8c5ad3ee8d4a9f8ff6fbb41287e55928605d7d49ae153738db2"
	I0708 20:06:01.081047   31820 cri.go:89] found id: "6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa"
	I0708 20:06:01.081051   31820 cri.go:89] found id: "499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7"
	I0708 20:06:01.081055   31820 cri.go:89] found id: "ef250a5d2c6701c36dbb63dc1494bd02a11629e58b9b6ad5ab4a0585f444dbe9"
	I0708 20:06:01.081059   31820 cri.go:89] found id: "67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19"
	I0708 20:06:01.081066   31820 cri.go:89] found id: "dd8ad312a5acddb79be337823087ee2b87d36262359d11cd3661e4a31d3026ec"
	I0708 20:06:01.081070   31820 cri.go:89] found id: "08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e"
	I0708 20:06:01.081075   31820 cri.go:89] found id: "0ed1c59e04eb8e9c5a9503853a55dd8185bbd443c359ce6d37d9f0c062505e67"
	I0708 20:06:01.081079   31820 cri.go:89] found id: "019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9"
	I0708 20:06:01.081083   31820 cri.go:89] found id: "e4326cf8a34b61a7baf29d68ba8e1b5c1c5f72972d74e1a73df5303f1cef7586"
	I0708 20:06:01.081088   31820 cri.go:89] found id: ""
	I0708 20:06:01.081128   31820 ssh_runner.go:195] Run: sudo runc list -f json
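The `found id:` entries above are the raw output of the label-filtered CRI query a few lines earlier; a minimal sketch of running the same checks by hand on the node, using only the commands already shown in the log:

	# Sketch only: list kube-system container IDs the way the log does (--quiet prints IDs only),
	# then dump the low-level runc view of the same containers as JSON.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json
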
	
	
	==> CRI-O <==
	Jul 08 20:08:57 ha-511021 crio[3868]: time="2024-07-08 20:08:57.952535460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720469337952506013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db966062-bc94-414e-a179-7c1b93ea327c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:08:57 ha-511021 crio[3868]: time="2024-07-08 20:08:57.953428911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91703f49-a4a4-441d-a4a6-24d842c3a417 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:08:57 ha-511021 crio[3868]: time="2024-07-08 20:08:57.953511753Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91703f49-a4a4-441d-a4a6-24d842c3a417 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:08:57 ha-511021 crio[3868]: time="2024-07-08 20:08:57.954501263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7e9a087907d7c028ca0b7d30efd5d52a3aa4d4ec1c01d4694ce9f29a6ccff49,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720469257887325435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a092bcfc2c4cf52b3a7a13ad5de69f2705f9f47507b1ff3c846fd063dc62b0e,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720469225892720085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad6fd7c3f9cad31104529097c8feeb16ff0c5ce58c2ed27a50b3743232c0bc5,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720469210892212686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb8ddfc4919dff163e345f60e168e06f35c9d2988df41561e920c4448bd8fed,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720469208901888135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec27cea09fe4b6c1702ac07555fb0dc3e8a50de265f5516597a359c8e5efa4e,PodSandboxId:f97c5267622e6708415275ef934c949e657a2f8147e4826cab37b534dc64d8e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720469200285006902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3fc8ef1e0299d99ef60bf4fbeae19194d5c36940ec08ad10e6ce0ce357c232,PodSandboxId:f586e1626531019f80ebbd1a8ced37f08e948582fa0190d1ec4231539ca1986b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720469178439433586,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29a9ed466df566b5a45a87a004582e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:6dea00926f165df26d06c6421a15f2c6f0124a7ee17dcff8893fa517b3e434a7,PodSandboxId:1c8757727c0796600d9c33cd7b1d60eb582f2a8a8d270a0a592c16485c6b1184,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720469167055477055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:80d3a01323446653a7398eff9a324e1447553ba76ff841a403de2c956bcfd4ba,PodSandboxId:15bfe51f73f1d04fcb452c2b9823a6053077a02ca13b7b7df4a96dbe1c4bf4d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469167160050965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38802be5ddf5a10afb78b7100b1dd555db233a693a398965ccca1743380bb1fe,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720469167067135118,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720469166796838413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f47fb0f400e915295b2ec21e227b8000e1936d00aa1e9265345bcf18da00776,PodSandboxId:fe17ace71d58c0de7ba910b637efe0025726e7dccf5a0e662e230cc9592510be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469166944613398,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2303835cb3470ac48e1c2f7eeacbd0c55e180b7acf710d2929e5f1f7c987570,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720469166990626313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f3
82d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c97c9bf4b2ba515d9c57ff1ad82fdc07c3fa398efe0f30e200eeb4afa9b8b6d,PodSandboxId:933bc29f90e9808f147ef51a67f08bb1bc76f5e51a4380d2fe5089323f512648,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720469166802683027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string
]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59de9e6a107817af76862bda008f35a5bdbc9c446829a20e23b865829f0e4faa,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720469166736092660,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bde8b17ea0c0a6fdba42f0b205c7d9bcbc19c9c1b529fc4a8f65bd2e6c9c994,PodSandboxId:93bc3377fc8a32869f1698a0c90a2260b2d53df153fb28fe29e1ab8bebe272dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720469166726621718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kuber
netes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720468678300626732,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernet
es.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535991336335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535981042931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f
6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720468532672426940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d76
91a75a899,State:CONTAINER_EXITED,CreatedAt:1720468512224119753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt
:1720468512188864425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91703f49-a4a4-441d-a4a6-24d842c3a417 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.002115304Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=387f25ac-2edc-4ece-80db-efdea83c8a17 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.002216694Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=387f25ac-2edc-4ece-80db-efdea83c8a17 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.003394701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f30937a1-86b1-4f2a-a27f-748bd993f8cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.003895466Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720469338003871197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f30937a1-86b1-4f2a-a27f-748bd993f8cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.004568517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eea86ab1-44c0-4831-9e7e-cc2717b74f00 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.004628945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eea86ab1-44c0-4831-9e7e-cc2717b74f00 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.005111218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7e9a087907d7c028ca0b7d30efd5d52a3aa4d4ec1c01d4694ce9f29a6ccff49,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720469257887325435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a092bcfc2c4cf52b3a7a13ad5de69f2705f9f47507b1ff3c846fd063dc62b0e,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720469225892720085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad6fd7c3f9cad31104529097c8feeb16ff0c5ce58c2ed27a50b3743232c0bc5,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720469210892212686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb8ddfc4919dff163e345f60e168e06f35c9d2988df41561e920c4448bd8fed,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720469208901888135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec27cea09fe4b6c1702ac07555fb0dc3e8a50de265f5516597a359c8e5efa4e,PodSandboxId:f97c5267622e6708415275ef934c949e657a2f8147e4826cab37b534dc64d8e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720469200285006902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3fc8ef1e0299d99ef60bf4fbeae19194d5c36940ec08ad10e6ce0ce357c232,PodSandboxId:f586e1626531019f80ebbd1a8ced37f08e948582fa0190d1ec4231539ca1986b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720469178439433586,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29a9ed466df566b5a45a87a004582e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:6dea00926f165df26d06c6421a15f2c6f0124a7ee17dcff8893fa517b3e434a7,PodSandboxId:1c8757727c0796600d9c33cd7b1d60eb582f2a8a8d270a0a592c16485c6b1184,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720469167055477055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:80d3a01323446653a7398eff9a324e1447553ba76ff841a403de2c956bcfd4ba,PodSandboxId:15bfe51f73f1d04fcb452c2b9823a6053077a02ca13b7b7df4a96dbe1c4bf4d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469167160050965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38802be5ddf5a10afb78b7100b1dd555db233a693a398965ccca1743380bb1fe,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720469167067135118,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720469166796838413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f47fb0f400e915295b2ec21e227b8000e1936d00aa1e9265345bcf18da00776,PodSandboxId:fe17ace71d58c0de7ba910b637efe0025726e7dccf5a0e662e230cc9592510be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469166944613398,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2303835cb3470ac48e1c2f7eeacbd0c55e180b7acf710d2929e5f1f7c987570,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720469166990626313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f3
82d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c97c9bf4b2ba515d9c57ff1ad82fdc07c3fa398efe0f30e200eeb4afa9b8b6d,PodSandboxId:933bc29f90e9808f147ef51a67f08bb1bc76f5e51a4380d2fe5089323f512648,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720469166802683027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string
]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59de9e6a107817af76862bda008f35a5bdbc9c446829a20e23b865829f0e4faa,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720469166736092660,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bde8b17ea0c0a6fdba42f0b205c7d9bcbc19c9c1b529fc4a8f65bd2e6c9c994,PodSandboxId:93bc3377fc8a32869f1698a0c90a2260b2d53df153fb28fe29e1ab8bebe272dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720469166726621718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kuber
netes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720468678300626732,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernet
es.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535991336335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535981042931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f
6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720468532672426940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d76
91a75a899,State:CONTAINER_EXITED,CreatedAt:1720468512224119753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt
:1720468512188864425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eea86ab1-44c0-4831-9e7e-cc2717b74f00 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.057068263Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d373263-4fc2-41c9-9f0f-f5de7dbad571 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.057152604Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d373263-4fc2-41c9-9f0f-f5de7dbad571 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.060340478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01ddd573-9727-48d3-ae33-fb4ffd2e0e0e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.063503343Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720469338063391317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01ddd573-9727-48d3-ae33-fb4ffd2e0e0e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.071647365Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=268c24cc-f56a-40c3-af39-9772e8514e32 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.071734004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=268c24cc-f56a-40c3-af39-9772e8514e32 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.072238136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7e9a087907d7c028ca0b7d30efd5d52a3aa4d4ec1c01d4694ce9f29a6ccff49,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720469257887325435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a092bcfc2c4cf52b3a7a13ad5de69f2705f9f47507b1ff3c846fd063dc62b0e,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720469225892720085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad6fd7c3f9cad31104529097c8feeb16ff0c5ce58c2ed27a50b3743232c0bc5,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720469210892212686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb8ddfc4919dff163e345f60e168e06f35c9d2988df41561e920c4448bd8fed,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720469208901888135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec27cea09fe4b6c1702ac07555fb0dc3e8a50de265f5516597a359c8e5efa4e,PodSandboxId:f97c5267622e6708415275ef934c949e657a2f8147e4826cab37b534dc64d8e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720469200285006902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3fc8ef1e0299d99ef60bf4fbeae19194d5c36940ec08ad10e6ce0ce357c232,PodSandboxId:f586e1626531019f80ebbd1a8ced37f08e948582fa0190d1ec4231539ca1986b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720469178439433586,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29a9ed466df566b5a45a87a004582e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:6dea00926f165df26d06c6421a15f2c6f0124a7ee17dcff8893fa517b3e434a7,PodSandboxId:1c8757727c0796600d9c33cd7b1d60eb582f2a8a8d270a0a592c16485c6b1184,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720469167055477055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:80d3a01323446653a7398eff9a324e1447553ba76ff841a403de2c956bcfd4ba,PodSandboxId:15bfe51f73f1d04fcb452c2b9823a6053077a02ca13b7b7df4a96dbe1c4bf4d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469167160050965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38802be5ddf5a10afb78b7100b1dd555db233a693a398965ccca1743380bb1fe,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720469167067135118,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720469166796838413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f47fb0f400e915295b2ec21e227b8000e1936d00aa1e9265345bcf18da00776,PodSandboxId:fe17ace71d58c0de7ba910b637efe0025726e7dccf5a0e662e230cc9592510be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469166944613398,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2303835cb3470ac48e1c2f7eeacbd0c55e180b7acf710d2929e5f1f7c987570,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720469166990626313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f3
82d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c97c9bf4b2ba515d9c57ff1ad82fdc07c3fa398efe0f30e200eeb4afa9b8b6d,PodSandboxId:933bc29f90e9808f147ef51a67f08bb1bc76f5e51a4380d2fe5089323f512648,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720469166802683027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string
]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59de9e6a107817af76862bda008f35a5bdbc9c446829a20e23b865829f0e4faa,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720469166736092660,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bde8b17ea0c0a6fdba42f0b205c7d9bcbc19c9c1b529fc4a8f65bd2e6c9c994,PodSandboxId:93bc3377fc8a32869f1698a0c90a2260b2d53df153fb28fe29e1ab8bebe272dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720469166726621718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kuber
netes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720468678300626732,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernet
es.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535991336335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535981042931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f
6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720468532672426940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d76
91a75a899,State:CONTAINER_EXITED,CreatedAt:1720468512224119753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt
:1720468512188864425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=268c24cc-f56a-40c3-af39-9772e8514e32 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.121229874Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ce0af29-5d11-40dc-bd58-2cf34aff9f2e name=/runtime.v1.RuntimeService/Version
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.121360540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ce0af29-5d11-40dc-bd58-2cf34aff9f2e name=/runtime.v1.RuntimeService/Version
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.122994414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb93442b-6dcc-4953-837b-84399321c087 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.123599382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720469338123570056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb93442b-6dcc-4953-837b-84399321c087 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.124307252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2df844f7-5531-402e-bacc-f8073acb4078 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.124370806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2df844f7-5531-402e-bacc-f8073acb4078 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:08:58 ha-511021 crio[3868]: time="2024-07-08 20:08:58.124948835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7e9a087907d7c028ca0b7d30efd5d52a3aa4d4ec1c01d4694ce9f29a6ccff49,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720469257887325435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a092bcfc2c4cf52b3a7a13ad5de69f2705f9f47507b1ff3c846fd063dc62b0e,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720469225892720085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad6fd7c3f9cad31104529097c8feeb16ff0c5ce58c2ed27a50b3743232c0bc5,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720469210892212686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb8ddfc4919dff163e345f60e168e06f35c9d2988df41561e920c4448bd8fed,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720469208901888135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec27cea09fe4b6c1702ac07555fb0dc3e8a50de265f5516597a359c8e5efa4e,PodSandboxId:f97c5267622e6708415275ef934c949e657a2f8147e4826cab37b534dc64d8e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720469200285006902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3fc8ef1e0299d99ef60bf4fbeae19194d5c36940ec08ad10e6ce0ce357c232,PodSandboxId:f586e1626531019f80ebbd1a8ced37f08e948582fa0190d1ec4231539ca1986b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720469178439433586,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29a9ed466df566b5a45a87a004582e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:6dea00926f165df26d06c6421a15f2c6f0124a7ee17dcff8893fa517b3e434a7,PodSandboxId:1c8757727c0796600d9c33cd7b1d60eb582f2a8a8d270a0a592c16485c6b1184,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720469167055477055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:80d3a01323446653a7398eff9a324e1447553ba76ff841a403de2c956bcfd4ba,PodSandboxId:15bfe51f73f1d04fcb452c2b9823a6053077a02ca13b7b7df4a96dbe1c4bf4d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469167160050965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38802be5ddf5a10afb78b7100b1dd555db233a693a398965ccca1743380bb1fe,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720469167067135118,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720469166796838413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f47fb0f400e915295b2ec21e227b8000e1936d00aa1e9265345bcf18da00776,PodSandboxId:fe17ace71d58c0de7ba910b637efe0025726e7dccf5a0e662e230cc9592510be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469166944613398,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2303835cb3470ac48e1c2f7eeacbd0c55e180b7acf710d2929e5f1f7c987570,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720469166990626313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f3
82d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c97c9bf4b2ba515d9c57ff1ad82fdc07c3fa398efe0f30e200eeb4afa9b8b6d,PodSandboxId:933bc29f90e9808f147ef51a67f08bb1bc76f5e51a4380d2fe5089323f512648,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720469166802683027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string
]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59de9e6a107817af76862bda008f35a5bdbc9c446829a20e23b865829f0e4faa,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720469166736092660,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bde8b17ea0c0a6fdba42f0b205c7d9bcbc19c9c1b529fc4a8f65bd2e6c9c994,PodSandboxId:93bc3377fc8a32869f1698a0c90a2260b2d53df153fb28fe29e1ab8bebe272dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720469166726621718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kuber
netes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720468678300626732,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernet
es.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535991336335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535981042931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f
6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720468532672426940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d76
91a75a899,State:CONTAINER_EXITED,CreatedAt:1720468512224119753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt
:1720468512188864425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2df844f7-5531-402e-bacc-f8073acb4078 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d7e9a087907d7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       5                   f8007a8b85880       storage-provisioner
	9a092bcfc2c4c       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               3                   931eb703b9122       kindnet-4f49v
	8ad6fd7c3f9ca       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      2 minutes ago        Running             kube-controller-manager   2                   a6e9ec1666c2b       kube-controller-manager-ha-511021
	ffb8ddfc4919d       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      2 minutes ago        Running             kube-apiserver            3                   25a60047a3471       kube-apiserver-ha-511021
	9ec27cea09fe4       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   f97c5267622e6       busybox-fc5497c4f-w8l78
	ad3fc8ef1e029       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   f586e16265310       kube-vip-ha-511021
	80d3a01323446       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   15bfe51f73f1d       coredns-7db6d8ff4d-w6m9c
	38802be5ddf5a       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      2 minutes ago        Exited              kindnet-cni               2                   931eb703b9122       kindnet-4f49v
	6dea00926f165       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      2 minutes ago        Running             kube-proxy                1                   1c8757727c079       kube-proxy-tmkjf
	a2303835cb347       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      2 minutes ago        Exited              kube-apiserver            2                   25a60047a3471       kube-apiserver-ha-511021
	4f47fb0f400e9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   fe17ace71d58c       coredns-7db6d8ff4d-4lzjf
	3c97c9bf4b2ba       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   933bc29f90e98       etcd-ha-511021
	6b4723de2bd2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       4                   f8007a8b85880       storage-provisioner
	59de9e6a10781       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      2 minutes ago        Exited              kube-controller-manager   1                   a6e9ec1666c2b       kube-controller-manager-ha-511021
	7bde8b17ea0c0       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      2 minutes ago        Running             kube-scheduler            1                   93bc3377fc8a3       kube-scheduler-ha-511021
	f1ad4f76c216a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   b1cbe60f17e1a       busybox-fc5497c4f-w8l78
	6b083875d2679       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   a361ba0082084       coredns-7db6d8ff4d-w6m9c
	499dc5b41a3d6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   3765b2ad464be       coredns-7db6d8ff4d-4lzjf
	67153dce61aaa       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago       Exited              kube-proxy                0                   8cba18d6a0140       kube-proxy-tmkjf
	08189f5ac12ce       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   2e4a76498c1cf       etcd-ha-511021
	019d794c36af8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago       Exited              kube-scheduler            0                   bc2b7b56fb60f       kube-scheduler-ha-511021
	
	
	==> coredns [499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7] <==
	[INFO] 10.244.1.2:48742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000218522s
	[INFO] 10.244.1.2:60141 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145244s
	[INFO] 10.244.0.4:58500 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001476805s
	[INFO] 10.244.0.4:53415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090934s
	[INFO] 10.244.0.4:60685 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159681s
	[INFO] 10.244.2.2:35117 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216541s
	[INFO] 10.244.2.2:56929 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000209242s
	[INFO] 10.244.2.2:57601 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099474s
	[INFO] 10.244.1.2:51767 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189518s
	[INFO] 10.244.1.2:53177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013929s
	[INFO] 10.244.0.4:44104 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095184s
	[INFO] 10.244.2.2:51012 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106956s
	[INFO] 10.244.2.2:37460 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124276s
	[INFO] 10.244.2.2:46238 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124359s
	[INFO] 10.244.1.2:56514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153739s
	[INFO] 10.244.1.2:45870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000362406s
	[INFO] 10.244.0.4:54901 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101371s
	[INFO] 10.244.0.4:38430 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128119s
	[INFO] 10.244.0.4:59433 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112582s
	[INFO] 10.244.2.2:50495 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000089543s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4f47fb0f400e915295b2ec21e227b8000e1936d00aa1e9265345bcf18da00776] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43864->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[132853472]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (08-Jul-2024 20:06:21.688) (total time: 10402ms):
	Trace[132853472]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43864->10.96.0.1:443: read: connection reset by peer 10402ms (20:06:32.091)
	Trace[132853472]: [10.402828587s] [10.402828587s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43864->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa] <==
	[INFO] 10.244.0.4:45493 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011856s
	[INFO] 10.244.0.4:43450 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049467s
	[INFO] 10.244.0.4:42950 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177837s
	[INFO] 10.244.2.2:44783 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001772539s
	[INFO] 10.244.2.2:60536 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011424s
	[INFO] 10.244.2.2:56160 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090498s
	[INFO] 10.244.2.2:60942 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001479529s
	[INFO] 10.244.2.2:59066 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078767s
	[INFO] 10.244.1.2:33094 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298986s
	[INFO] 10.244.1.2:41194 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092808s
	[INFO] 10.244.0.4:44172 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168392s
	[INFO] 10.244.0.4:47644 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085824s
	[INFO] 10.244.0.4:45776 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131918s
	[INFO] 10.244.2.2:53642 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164258s
	[INFO] 10.244.1.2:32877 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000282103s
	[INFO] 10.244.1.2:59022 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013901s
	[INFO] 10.244.0.4:35939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129873s
	[INFO] 10.244.2.2:48648 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161626s
	[INFO] 10.244.2.2:59172 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147702s
	[INFO] 10.244.2.2:45542 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156821s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [80d3a01323446653a7398eff9a324e1447553ba76ff841a403de2c956bcfd4ba] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1424619691]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (08-Jul-2024 20:06:16.235) (total time: 10000ms):
	Trace[1424619691]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (20:06:26.236)
	Trace[1424619691]: [10.000852418s] [10.000852418s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35526->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1982365158]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (08-Jul-2024 20:06:18.741) (total time: 13349ms):
	Trace[1982365158]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35526->10.96.0.1:443: read: connection reset by peer 13349ms (20:06:32.090)
	Trace[1982365158]: [13.349789332s] [13.349789332s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35526->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
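
The CoreDNS logs above show the pods losing their connection to the in-cluster apiserver Service (10.96.0.1:443) while the control plane restarts: TLS handshake timeouts, "no route to host", and "connection refused", with the older instances eventually receiving SIGTERM. As an illustrative follow-up check (not part of the captured run; it assumes the kubectl context is named after the ha-511021 profile and that CoreDNS carries the standard k8s-app=kube-dns label), the Service endpoints and CoreDNS pods could be inspected with:

    kubectl --context ha-511021 -n default get endpoints kubernetes
    kubectl --context ha-511021 -n kube-system get pods -l k8s-app=kube-dns -o wide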
	
	
	==> describe nodes <==
	Name:               ha-511021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T19_55_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:55:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:08:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 20:06:47 +0000   Mon, 08 Jul 2024 19:55:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 20:06:47 +0000   Mon, 08 Jul 2024 19:55:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 20:06:47 +0000   Mon, 08 Jul 2024 19:55:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 20:06:47 +0000   Mon, 08 Jul 2024 19:55:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.33
	  Hostname:    ha-511021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b87893acdd9a476ea34795541f3789df
	  System UUID:                b87893ac-dd9a-476e-a347-95541f3789df
	  Boot ID:                    17494c0f-24c9-4604-bfc5-8f8d6538a4f6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace    Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                               ------------  ----------  ---------------  -------------  ---
	  default      busybox-fc5497c4f-w8l78            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  coredns-7db6d8ff4d-4lzjf           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system  coredns-7db6d8ff4d-w6m9c           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system  etcd-ha-511021                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system  kindnet-4f49v                      100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system  kube-apiserver-ha-511021           250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system  kube-controller-manager-ha-511021  200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system  kube-proxy-tmkjf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system  kube-scheduler-ha-511021           100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system  kube-vip-ha-511021                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system  storage-provisioner                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 2m9s   kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-511021 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-511021 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-511021 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-511021 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Normal   RegisteredNode           11m    node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Warning  ContainerGCFailed        3m40s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m4s   node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Normal   RegisteredNode           115s   node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Normal   RegisteredNode           31s    node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	
	
	Name:               ha-511021-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T19_56_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:56:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:08:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 20:07:31 +0000   Mon, 08 Jul 2024 20:06:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 20:07:31 +0000   Mon, 08 Jul 2024 20:06:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 20:07:31 +0000   Mon, 08 Jul 2024 20:06:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 20:07:31 +0000   Mon, 08 Jul 2024 20:06:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    ha-511021-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 09ff24d6fb9848b0b108f4ecb99eedc3
	  System UUID:                09ff24d6-fb98-48b0-b108-f4ecb99eedc3
	  Boot ID:                    c44a5023-6fe5-4076-a69c-531dc15a7a1c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace    Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                   ------------  ----------  ---------------  -------------  ---
	  default      busybox-fc5497c4f-5xjfx                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  etcd-ha-511021-m02                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system  kindnet-gn8kn                          100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system  kube-apiserver-ha-511021-m02           250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-controller-manager-ha-511021-m02  200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-proxy-976tb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-scheduler-ha-511021-m02           100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-vip-ha-511021-m02                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                    node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-511021-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-511021-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-511021-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  NodeNotReady             9m22s                  node-controller  Node ha-511021-m02 status is now: NodeNotReady
	  Normal  Starting                 2m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node ha-511021-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node ha-511021-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m36s (x7 over 2m36s)  kubelet          Node ha-511021-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m4s                   node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  RegisteredNode           115s                   node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  RegisteredNode           31s                    node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	
	
	Name:               ha-511021-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T19_57_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:57:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:08:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 20:08:30 +0000   Mon, 08 Jul 2024 20:08:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 20:08:30 +0000   Mon, 08 Jul 2024 20:08:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 20:08:30 +0000   Mon, 08 Jul 2024 20:08:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 20:08:30 +0000   Mon, 08 Jul 2024 20:08:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-511021-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a1265a3cabd4e6aae62914cc287dffa
	  System UUID:                8a1265a3-cabd-4e6a-ae62-914cc287dffa
	  Boot ID:                    b33ca7de-ce5e-43f6-821e-f5aeb13a82c8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace    Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                   ------------  ----------  ---------------  -------------  ---
	  default      busybox-fc5497c4f-x9p75                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  etcd-ha-511021-m03                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system  kindnet-kfpzq                          100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system  kube-apiserver-ha-511021-m03           250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-controller-manager-ha-511021-m03  200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-proxy-scxw5                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-scheduler-ha-511021-m03           100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-vip-ha-511021-m03                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-511021-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-511021-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-511021-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-511021-m03 event: Registered Node ha-511021-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-511021-m03 event: Registered Node ha-511021-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-511021-m03 event: Registered Node ha-511021-m03 in Controller
	  Normal   RegisteredNode           2m4s               node-controller  Node ha-511021-m03 event: Registered Node ha-511021-m03 in Controller
	  Normal   RegisteredNode           115s               node-controller  Node ha-511021-m03 event: Registered Node ha-511021-m03 in Controller
	  Normal   NodeNotReady             84s                node-controller  Node ha-511021-m03 status is now: NodeNotReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeNotReady             59s                kubelet          Node ha-511021-m03 status is now: NodeNotReady
	  Warning  Rebooted                 58s (x2 over 59s)  kubelet          Node ha-511021-m03 has been rebooted, boot id: b33ca7de-ce5e-43f6-821e-f5aeb13a82c8
	  Normal   NodeHasSufficientMemory  58s (x3 over 59s)  kubelet          Node ha-511021-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x3 over 59s)  kubelet          Node ha-511021-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x3 over 59s)  kubelet          Node ha-511021-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                58s                kubelet          Node ha-511021-m03 status is now: NodeReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-511021-m03 event: Registered Node ha-511021-m03 in Controller
	
	
	Name:               ha-511021-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T19_58_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:58:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:08:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 20:08:49 +0000   Mon, 08 Jul 2024 20:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 20:08:49 +0000   Mon, 08 Jul 2024 20:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 20:08:49 +0000   Mon, 08 Jul 2024 20:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 20:08:49 +0000   Mon, 08 Jul 2024 20:08:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    ha-511021-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef479bd2efc3487eb39d936b4399c97b
	  System UUID:                ef479bd2-efc3-487e-b39d-936b4399c97b
	  Boot ID:                    600c1f2b-1d13-4908-ad9e-08608ab905a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace    Name              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----              ------------  ----------  ---------------  -------------  ---
	  kube-system  kindnet-bbbp6     100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system  kube-proxy-7mb58  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-511021-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-511021-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-511021-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-511021-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m4s               node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal   RegisteredNode           115s               node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal   NodeNotReady             84s                node-controller  Node ha-511021-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-511021-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-511021-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-511021-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-511021-m04 has been rebooted, boot id: 600c1f2b-1d13-4908-ad9e-08608ab905a7
	  Normal   NodeReady                9s                 kubelet          Node ha-511021-m04 status is now: NodeReady
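
Despite the errors captured above, all four nodes (ha-511021 through ha-511021-m04) report Ready again after the restart, and ha-511021-m03 and ha-511021-m04 additionally record recent Rebooted events. A quick way to confirm the same picture outside the log capture (a hypothetical follow-up, again assuming the ha-511021 kubectl context) would be:

    kubectl --context ha-511021 get nodes -o wide
    kubectl --context ha-511021 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'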
	
	
	==> dmesg <==
	[  +0.119364] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.209787] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.142097] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.285009] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.308511] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.058301] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.483782] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.535916] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.022132] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.103961] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.289495] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.234845] kauditd_printk_skb: 72 callbacks suppressed
	[Jul 8 20:02] kauditd_printk_skb: 1 callbacks suppressed
	[Jul 8 20:05] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	[  +0.148703] systemd-fstab-generator[3799]: Ignoring "noauto" option for root device
	[  +0.196171] systemd-fstab-generator[3813]: Ignoring "noauto" option for root device
	[  +0.142590] systemd-fstab-generator[3825]: Ignoring "noauto" option for root device
	[  +0.310749] systemd-fstab-generator[3853]: Ignoring "noauto" option for root device
	[Jul 8 20:06] systemd-fstab-generator[3954]: Ignoring "noauto" option for root device
	[  +0.084307] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.892132] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.268208] kauditd_printk_skb: 86 callbacks suppressed
	[ +17.095890] kauditd_printk_skb: 1 callbacks suppressed
	[ +20.309309] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.477460] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e] <==
	{"level":"info","ts":"2024-07-08T20:04:20.433963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"578695e7c923614c is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-08T20:04:20.434013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"578695e7c923614c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-08T20:04:20.434027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"578695e7c923614c received MsgPreVoteResp from 578695e7c923614c at term 2"}
	{"level":"info","ts":"2024-07-08T20:04:20.434041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"578695e7c923614c [logterm: 2, index: 2131] sent MsgPreVote request to 6e4a8f4a221cc134 at term 2"}
	{"level":"info","ts":"2024-07-08T20:04:20.434048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"578695e7c923614c [logterm: 2, index: 2131] sent MsgPreVote request to 9075682618332c40 at term 2"}
	{"level":"warn","ts":"2024-07-08T20:04:20.461675Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.33:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T20:04:20.461859Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.33:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-08T20:04:20.461969Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"578695e7c923614c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-08T20:04:20.462169Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462204Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462229Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462319Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462405Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462546Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462606Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462633Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.462721Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.462872Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.463092Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.463204Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.463316Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.463406Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.467132Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.33:2380"}
	{"level":"info","ts":"2024-07-08T20:04:20.467305Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.33:2380"}
	{"level":"info","ts":"2024-07-08T20:04:20.467338Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-511021","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.33:2380"],"advertise-client-urls":["https://192.168.39.33:2379"]}
	
	
	==> etcd [3c97c9bf4b2ba515d9c57ff1ad82fdc07c3fa398efe0f30e200eeb4afa9b8b6d] <==
	{"level":"warn","ts":"2024-07-08T20:07:54.548235Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"9075682618332c40","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:07:54.647868Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"9075682618332c40","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:07:54.682673Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"9075682618332c40","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:07:54.74766Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"9075682618332c40","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:07:54.808208Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"9075682618332c40","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:07:54.848093Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"9075682618332c40","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:07:54.918024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"9075682618332c40","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:07:54.920267Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"9075682618332c40","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:07:54.948208Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"9075682618332c40","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:07:54.986162Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"578695e7c923614c","from":"578695e7c923614c","remote-peer-id":"9075682618332c40","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-08T20:07:57.529119Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9075682618332c40","rtt":"0s","error":"dial tcp 192.168.39.70:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-08T20:07:57.529195Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9075682618332c40","rtt":"0s","error":"dial tcp 192.168.39.70:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-08T20:07:58.337433Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.70:2380/version","remote-member-id":"9075682618332c40","error":"Get \"https://192.168.39.70:2380/version\": dial tcp 192.168.39.70:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-08T20:07:58.33754Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9075682618332c40","error":"Get \"https://192.168.39.70:2380/version\": dial tcp 192.168.39.70:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-08T20:08:02.340046Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.70:2380/version","remote-member-id":"9075682618332c40","error":"Get \"https://192.168.39.70:2380/version\": dial tcp 192.168.39.70:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-08T20:08:02.340112Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9075682618332c40","error":"Get \"https://192.168.39.70:2380/version\": dial tcp 192.168.39.70:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-08T20:08:02.530309Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9075682618332c40","rtt":"0s","error":"dial tcp 192.168.39.70:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-08T20:08:02.530283Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9075682618332c40","rtt":"0s","error":"dial tcp 192.168.39.70:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-08T20:08:05.06212Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:08:05.062198Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:08:05.071384Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"578695e7c923614c","to":"9075682618332c40","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-08T20:08:05.071601Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:08:05.07812Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"578695e7c923614c","to":"9075682618332c40","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-08T20:08:05.078194Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:08:05.143939Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	
	
	==> kernel <==
	 20:08:58 up 14 min,  0 users,  load average: 0.27, 0.29, 0.19
	Linux ha-511021 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [38802be5ddf5a10afb78b7100b1dd555db233a693a398965ccca1743380bb1fe] <==
	I0708 20:06:07.481556       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0708 20:06:07.546887       1 main.go:107] hostIP = 192.168.39.33
	podIP = 192.168.39.33
	I0708 20:06:07.547165       1 main.go:116] setting mtu 1500 for CNI 
	I0708 20:06:07.547259       1 main.go:146] kindnetd IP family: "ipv4"
	I0708 20:06:07.547311       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0708 20:06:17.796070       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0708 20:06:27.805052       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0708 20:06:29.017339       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0708 20:06:32.090291       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0708 20:06:35.161597       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kindnet [9a092bcfc2c4cf52b3a7a13ad5de69f2705f9f47507b1ff3c846fd063dc62b0e] <==
	I0708 20:08:26.876847       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:08:36.885629       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:08:36.885675       1 main.go:227] handling current node
	I0708 20:08:36.885689       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:08:36.885694       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:08:36.885853       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0708 20:08:36.885874       1 main.go:250] Node ha-511021-m03 has CIDR [10.244.2.0/24] 
	I0708 20:08:36.885928       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:08:36.885947       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:08:46.904762       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:08:46.904902       1 main.go:227] handling current node
	I0708 20:08:46.904931       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:08:46.904980       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:08:46.905193       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0708 20:08:46.905233       1 main.go:250] Node ha-511021-m03 has CIDR [10.244.2.0/24] 
	I0708 20:08:46.905361       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:08:46.905397       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:08:56.923307       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:08:56.923501       1 main.go:227] handling current node
	I0708 20:08:56.923531       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:08:56.923550       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:08:56.923705       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0708 20:08:56.923726       1 main.go:250] Node ha-511021-m03 has CIDR [10.244.2.0/24] 
	I0708 20:08:56.923782       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:08:56.923883       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a2303835cb3470ac48e1c2f7eeacbd0c55e180b7acf710d2929e5f1f7c987570] <==
	I0708 20:06:07.641731       1 options.go:221] external host was not specified, using 192.168.39.33
	I0708 20:06:07.646930       1 server.go:148] Version: v1.30.2
	I0708 20:06:07.646992       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:06:08.204087       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0708 20:06:08.209975       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0708 20:06:08.214079       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0708 20:06:08.214211       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0708 20:06:08.214484       1 instance.go:299] Using reconciler: lease
	W0708 20:06:28.204900       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0708 20:06:28.204901       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0708 20:06:28.215014       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [ffb8ddfc4919dff163e345f60e168e06f35c9d2988df41561e920c4448bd8fed] <==
	I0708 20:06:50.963645       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0708 20:06:50.963685       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0708 20:06:51.119393       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0708 20:06:51.127396       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0708 20:06:51.127470       1 policy_source.go:224] refreshing policies
	I0708 20:06:51.136539       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0708 20:06:51.136621       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0708 20:06:51.137564       1 shared_informer.go:320] Caches are synced for configmaps
	I0708 20:06:51.138210       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0708 20:06:51.145586       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0708 20:06:51.150888       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0708 20:06:51.148411       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0708 20:06:51.148449       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0708 20:06:51.151899       1 aggregator.go:165] initial CRD sync complete...
	I0708 20:06:51.151948       1 autoregister_controller.go:141] Starting autoregister controller
	I0708 20:06:51.151973       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0708 20:06:51.152018       1 cache.go:39] Caches are synced for autoregister controller
	W0708 20:06:51.162536       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.216 192.168.39.70]
	I0708 20:06:51.164843       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 20:06:51.175970       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0708 20:06:51.182920       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0708 20:06:51.203083       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 20:06:51.954116       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0708 20:06:52.402084       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.216 192.168.39.33 192.168.39.70]
	W0708 20:07:02.408083       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.216 192.168.39.33]
	
	
	==> kube-controller-manager [59de9e6a107817af76862bda008f35a5bdbc9c446829a20e23b865829f0e4faa] <==
	I0708 20:06:07.924599       1 serving.go:380] Generated self-signed cert in-memory
	I0708 20:06:08.877167       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0708 20:06:08.877210       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:06:08.879203       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0708 20:06:08.879889       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0708 20:06:08.880018       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0708 20:06:08.880106       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0708 20:06:29.222659       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.33:8443/healthz\": dial tcp 192.168.39.33:8443: connect: connection refused"
	
	
	==> kube-controller-manager [8ad6fd7c3f9cad31104529097c8feeb16ff0c5ce58c2ed27a50b3743232c0bc5] <==
	I0708 20:07:03.398092       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 20:07:03.412782       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0708 20:07:03.421535       1 shared_informer.go:320] Caches are synced for stateful set
	I0708 20:07:03.438498       1 shared_informer.go:320] Caches are synced for expand
	I0708 20:07:03.438660       1 shared_informer.go:320] Caches are synced for attach detach
	I0708 20:07:03.454586       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0708 20:07:03.927426       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 20:07:03.992885       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 20:07:03.992938       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0708 20:07:05.823018       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-dlg9v EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-dlg9v\": the object has been modified; please apply your changes to the latest version and try again"
	I0708 20:07:05.823281       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"830e686c-d9a8-4133-a3c6-9c22b7460346", APIVersion:"v1", ResourceVersion:"245", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-dlg9v EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-dlg9v": the object has been modified; please apply your changes to the latest version and try again
	I0708 20:07:05.837641       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="69.313846ms"
	I0708 20:07:05.837993       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="120.127µs"
	I0708 20:07:09.399469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.69413ms"
	I0708 20:07:09.400242       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.402µs"
	I0708 20:07:25.776101       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-dlg9v EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-dlg9v\": the object has been modified; please apply your changes to the latest version and try again"
	I0708 20:07:25.778478       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"830e686c-d9a8-4133-a3c6-9c22b7460346", APIVersion:"v1", ResourceVersion:"245", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-dlg9v EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-dlg9v": the object has been modified; please apply your changes to the latest version and try again
	I0708 20:07:25.782467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.132518ms"
	I0708 20:07:25.783441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="381.771µs"
	I0708 20:07:34.839038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.392446ms"
	I0708 20:07:34.839209       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.551µs"
	I0708 20:08:00.929668       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="138.08µs"
	I0708 20:08:20.243148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.264269ms"
	I0708 20:08:20.243342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.062µs"
	I0708 20:08:49.912939       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-511021-m04"
	
	
	==> kube-proxy [67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19] <==
	E0708 20:03:14.012071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:17.081591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:17.081877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:17.082488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:17.082626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:17.082875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:17.082977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:23.226892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:23.226960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:23.226892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:23.226990       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:23.227399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:23.227567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:32.441428       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:32.441505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:32.442463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:32.442532       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:38.586474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:38.586557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:47.802128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:47.803082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:57.017962       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:57.018369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:04:03.162461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:04:03.162528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [6dea00926f165df26d06c6421a15f2c6f0124a7ee17dcff8893fa517b3e434a7] <==
	I0708 20:06:08.319155       1 server_linux.go:69] "Using iptables proxy"
	E0708 20:06:09.114747       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-511021\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0708 20:06:12.186459       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-511021\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0708 20:06:15.258101       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-511021\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0708 20:06:21.404520       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-511021\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0708 20:06:30.617720       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-511021\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0708 20:06:49.326403       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.33"]
	I0708 20:06:49.374073       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 20:06:49.374152       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 20:06:49.374170       1 server_linux.go:165] "Using iptables Proxier"
	I0708 20:06:49.376645       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 20:06:49.376935       1 server.go:872] "Version info" version="v1.30.2"
	I0708 20:06:49.376966       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:06:49.378154       1 config.go:192] "Starting service config controller"
	I0708 20:06:49.378193       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 20:06:49.378589       1 config.go:101] "Starting endpoint slice config controller"
	I0708 20:06:49.378619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 20:06:49.379622       1 config.go:319] "Starting node config controller"
	I0708 20:06:49.379686       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 20:06:49.478909       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 20:06:49.479011       1 shared_informer.go:320] Caches are synced for service config
	I0708 20:06:49.482316       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9] <==
	E0708 20:04:17.672311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 20:04:17.958171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 20:04:17.958202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 20:04:17.997560       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 20:04:17.997654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0708 20:04:18.373136       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 20:04:18.373236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 20:04:18.550934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 20:04:18.551079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 20:04:18.712888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 20:04:18.712923       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 20:04:18.842342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 20:04:18.842392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 20:04:19.130897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 20:04:19.130983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 20:04:19.242744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 20:04:19.242909       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 20:04:19.539582       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 20:04:19.539629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0708 20:04:20.278662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 20:04:20.278720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0708 20:04:20.355573       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0708 20:04:20.355741       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0708 20:04:20.355987       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0708 20:04:20.365654       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7bde8b17ea0c0a6fdba42f0b205c7d9bcbc19c9c1b529fc4a8f65bd2e6c9c994] <==
	W0708 20:06:45.014010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.33:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:45.014055       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.33:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:46.920952       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.33:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:46.921065       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.33:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:47.804555       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.33:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:47.804693       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.33:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:47.855102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.33:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:47.855329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.33:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:48.046647       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.33:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:48.046753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.33:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:48.310034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.33:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:48.310076       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.33:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:48.803510       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.33:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:48.803580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.33:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:48.937296       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.33:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:48.937334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.33:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:50.972766       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0708 20:06:50.985353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0708 20:06:50.974503       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 20:06:50.974678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 20:06:50.989674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 20:06:50.989648       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 20:06:51.052250       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 20:06:51.052283       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0708 20:07:04.237737       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 08 20:06:51 ha-511021 kubelet[1369]: I0708 20:06:51.867742    1369 scope.go:117] "RemoveContainer" containerID="6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4"
	Jul 08 20:06:51 ha-511021 kubelet[1369]: E0708 20:06:51.868270    1369 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7d02def4-3af1-4268-a8fa-072c6fd71c83)\"" pod="kube-system/storage-provisioner" podUID="7d02def4-3af1-4268-a8fa-072c6fd71c83"
	Jul 08 20:06:54 ha-511021 kubelet[1369]: I0708 20:06:54.867070    1369 scope.go:117] "RemoveContainer" containerID="38802be5ddf5a10afb78b7100b1dd555db233a693a398965ccca1743380bb1fe"
	Jul 08 20:06:54 ha-511021 kubelet[1369]: E0708 20:06:54.867371    1369 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kindnet-cni pod=kindnet-4f49v_kube-system(1f0b50ca-73cb-4ffb-9676-09e3a28d7636)\"" pod="kube-system/kindnet-4f49v" podUID="1f0b50ca-73cb-4ffb-9676-09e3a28d7636"
	Jul 08 20:07:02 ha-511021 kubelet[1369]: I0708 20:07:02.867622    1369 scope.go:117] "RemoveContainer" containerID="6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4"
	Jul 08 20:07:02 ha-511021 kubelet[1369]: E0708 20:07:02.868499    1369 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7d02def4-3af1-4268-a8fa-072c6fd71c83)\"" pod="kube-system/storage-provisioner" podUID="7d02def4-3af1-4268-a8fa-072c6fd71c83"
	Jul 08 20:07:05 ha-511021 kubelet[1369]: I0708 20:07:05.867757    1369 scope.go:117] "RemoveContainer" containerID="38802be5ddf5a10afb78b7100b1dd555db233a693a398965ccca1743380bb1fe"
	Jul 08 20:07:15 ha-511021 kubelet[1369]: I0708 20:07:15.866862    1369 scope.go:117] "RemoveContainer" containerID="6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4"
	Jul 08 20:07:15 ha-511021 kubelet[1369]: E0708 20:07:15.867095    1369 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7d02def4-3af1-4268-a8fa-072c6fd71c83)\"" pod="kube-system/storage-provisioner" podUID="7d02def4-3af1-4268-a8fa-072c6fd71c83"
	Jul 08 20:07:18 ha-511021 kubelet[1369]: E0708 20:07:18.948124    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:07:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:07:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:07:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:07:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 20:07:25 ha-511021 kubelet[1369]: I0708 20:07:25.867312    1369 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-511021" podUID="c2d1c07a-51ae-4264-9fbc-fd7af40ac2d0"
	Jul 08 20:07:25 ha-511021 kubelet[1369]: I0708 20:07:25.890901    1369 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-511021"
	Jul 08 20:07:26 ha-511021 kubelet[1369]: I0708 20:07:26.867280    1369 scope.go:117] "RemoveContainer" containerID="6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4"
	Jul 08 20:07:26 ha-511021 kubelet[1369]: E0708 20:07:26.867613    1369 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7d02def4-3af1-4268-a8fa-072c6fd71c83)\"" pod="kube-system/storage-provisioner" podUID="7d02def4-3af1-4268-a8fa-072c6fd71c83"
	Jul 08 20:07:37 ha-511021 kubelet[1369]: I0708 20:07:37.866997    1369 scope.go:117] "RemoveContainer" containerID="6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4"
	Jul 08 20:07:38 ha-511021 kubelet[1369]: I0708 20:07:38.781598    1369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-511021" podStartSLOduration=13.781553079 podStartE2EDuration="13.781553079s" podCreationTimestamp="2024-07-08 20:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-08 20:07:32.972574935 +0000 UTC m=+734.299229264" watchObservedRunningTime="2024-07-08 20:07:38.781553079 +0000 UTC m=+740.108207428"
	Jul 08 20:08:18 ha-511021 kubelet[1369]: E0708 20:08:18.947230    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:08:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:08:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:08:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:08:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:08:57.651968   33207 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19195-5988/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-511021 -n ha-511021
helpers_test.go:261: (dbg) Run:  kubectl --context ha-511021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (402.35s)
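Note on the "bufio.Scanner: token too long" message in the stderr above: that is Go's bufio.ErrTooLong. A bufio.Scanner caps each token at bufio.MaxScanTokenSize (64 KiB) by default, so a single log line in lastStart.txt longer than that aborts the scan. Below is a minimal, self-contained sketch of how the error arises and the usual workaround of raising the scanner buffer; the file name is a placeholder and this is not minikube's actual logs.go code.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical log file containing very long lines
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// With the default 64 KiB cap, one longer line makes sc.Err() return
		// bufio.ErrTooLong ("bufio.Scanner: token too long"). Raising the cap
		// lets the scan continue:
		sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024) // allow lines up to 4 MiB
		for sc.Scan() {
			_ = sc.Text() // process each line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}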

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 stop -v=7 --alsologtostderr
E0708 20:09:23.843573   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-511021 stop -v=7 --alsologtostderr: exit status 82 (2m0.470289253s)
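The 2m0.47s runtime lines up with the stop loop visible in the stderr below: the driver polls the VM state roughly once per second for up to 120 attempts ("Waiting for machine to stop N/120") before the command gives up. A minimal sketch of that polling pattern, assuming a hypothetical stopped() check; illustrative only, not minikube's or libmachine's actual implementation.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls a caller-supplied "is the VM stopped?" check once per
	// second, up to maxTries times, mirroring the "Waiting for machine to stop
	// N/120" lines in the log below.
	func waitForStop(stopped func() bool, maxTries int) error {
		for i := 0; i < maxTries; i++ {
			if stopped() {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxTries)
			time.Sleep(time.Second)
		}
		return errors.New("timed out waiting for machine to stop")
	}

	func main() {
		deadline := time.Now().Add(3 * time.Second) // pretend the VM stops after 3 seconds
		if err := waitForStop(func() bool { return time.Now().After(deadline) }, 120); err != nil {
			fmt.Println(err)
		}
	}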

                                                
                                                
-- stdout --
	* Stopping node "ha-511021-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:09:17.713771   33615 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:09:17.714020   33615 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:09:17.714050   33615 out.go:304] Setting ErrFile to fd 2...
	I0708 20:09:17.714068   33615 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:09:17.714662   33615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:09:17.714920   33615 out.go:298] Setting JSON to false
	I0708 20:09:17.715002   33615 mustload.go:65] Loading cluster: ha-511021
	I0708 20:09:17.715341   33615 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:09:17.715421   33615 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 20:09:17.715629   33615 mustload.go:65] Loading cluster: ha-511021
	I0708 20:09:17.715757   33615 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:09:17.715779   33615 stop.go:39] StopHost: ha-511021-m04
	I0708 20:09:17.716120   33615 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:09:17.716171   33615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:09:17.730913   33615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46613
	I0708 20:09:17.731350   33615 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:09:17.731949   33615 main.go:141] libmachine: Using API Version  1
	I0708 20:09:17.731965   33615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:09:17.732376   33615 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:09:17.734631   33615 out.go:177] * Stopping node "ha-511021-m04"  ...
	I0708 20:09:17.736488   33615 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0708 20:09:17.736539   33615 main.go:141] libmachine: (ha-511021-m04) Calling .DriverName
	I0708 20:09:17.736792   33615 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0708 20:09:17.736815   33615 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHHostname
	I0708 20:09:17.739808   33615 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:09:17.740209   33615 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 21:08:44 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:09:17.740231   33615 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:09:17.740396   33615 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHPort
	I0708 20:09:17.740552   33615 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHKeyPath
	I0708 20:09:17.740735   33615 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHUsername
	I0708 20:09:17.740874   33615 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m04/id_rsa Username:docker}
	I0708 20:09:17.822073   33615 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0708 20:09:17.875482   33615 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0708 20:09:17.930889   33615 main.go:141] libmachine: Stopping "ha-511021-m04"...
	I0708 20:09:17.930923   33615 main.go:141] libmachine: (ha-511021-m04) Calling .GetState
	I0708 20:09:17.932699   33615 main.go:141] libmachine: (ha-511021-m04) Calling .Stop
	I0708 20:09:17.936067   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 0/120
	I0708 20:09:18.937626   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 1/120
	I0708 20:09:19.939089   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 2/120
	I0708 20:09:20.940465   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 3/120
	I0708 20:09:21.941869   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 4/120
	I0708 20:09:22.943186   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 5/120
	I0708 20:09:23.944445   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 6/120
	I0708 20:09:24.945956   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 7/120
	I0708 20:09:25.947392   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 8/120
	I0708 20:09:26.948708   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 9/120
	I0708 20:09:27.950571   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 10/120
	I0708 20:09:28.951860   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 11/120
	I0708 20:09:29.953250   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 12/120
	I0708 20:09:30.954832   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 13/120
	I0708 20:09:31.956331   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 14/120
	I0708 20:09:32.958190   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 15/120
	I0708 20:09:33.959578   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 16/120
	I0708 20:09:34.961882   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 17/120
	I0708 20:09:35.963162   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 18/120
	I0708 20:09:36.964438   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 19/120
	I0708 20:09:37.966486   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 20/120
	I0708 20:09:38.967947   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 21/120
	I0708 20:09:39.970071   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 22/120
	I0708 20:09:40.971305   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 23/120
	I0708 20:09:41.972843   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 24/120
	I0708 20:09:42.974682   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 25/120
	I0708 20:09:43.976039   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 26/120
	I0708 20:09:44.978179   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 27/120
	I0708 20:09:45.979791   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 28/120
	I0708 20:09:46.982080   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 29/120
	I0708 20:09:47.984229   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 30/120
	I0708 20:09:48.985492   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 31/120
	I0708 20:09:49.986869   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 32/120
	I0708 20:09:50.988311   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 33/120
	I0708 20:09:51.989765   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 34/120
	I0708 20:09:52.991755   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 35/120
	I0708 20:09:53.993873   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 36/120
	I0708 20:09:54.995328   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 37/120
	I0708 20:09:55.996610   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 38/120
	I0708 20:09:56.998106   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 39/120
	I0708 20:09:58.000387   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 40/120
	I0708 20:09:59.001751   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 41/120
	I0708 20:10:00.003357   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 42/120
	I0708 20:10:01.004756   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 43/120
	I0708 20:10:02.006054   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 44/120
	I0708 20:10:03.007740   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 45/120
	I0708 20:10:04.010029   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 46/120
	I0708 20:10:05.011807   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 47/120
	I0708 20:10:06.013878   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 48/120
	I0708 20:10:07.015352   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 49/120
	I0708 20:10:08.017418   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 50/120
	I0708 20:10:09.018749   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 51/120
	I0708 20:10:10.020764   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 52/120
	I0708 20:10:11.022189   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 53/120
	I0708 20:10:12.023710   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 54/120
	I0708 20:10:13.025375   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 55/120
	I0708 20:10:14.026716   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 56/120
	I0708 20:10:15.027982   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 57/120
	I0708 20:10:16.029227   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 58/120
	I0708 20:10:17.030560   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 59/120
	I0708 20:10:18.032918   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 60/120
	I0708 20:10:19.034435   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 61/120
	I0708 20:10:20.035872   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 62/120
	I0708 20:10:21.037228   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 63/120
	I0708 20:10:22.038455   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 64/120
	I0708 20:10:23.040161   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 65/120
	I0708 20:10:24.041448   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 66/120
	I0708 20:10:25.042888   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 67/120
	I0708 20:10:26.044165   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 68/120
	I0708 20:10:27.046063   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 69/120
	I0708 20:10:28.048409   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 70/120
	I0708 20:10:29.049801   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 71/120
	I0708 20:10:30.050992   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 72/120
	I0708 20:10:31.053022   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 73/120
	I0708 20:10:32.054391   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 74/120
	I0708 20:10:33.056420   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 75/120
	I0708 20:10:34.058025   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 76/120
	I0708 20:10:35.059408   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 77/120
	I0708 20:10:36.060648   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 78/120
	I0708 20:10:37.061878   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 79/120
	I0708 20:10:38.064012   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 80/120
	I0708 20:10:39.066404   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 81/120
	I0708 20:10:40.067723   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 82/120
	I0708 20:10:41.070318   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 83/120
	I0708 20:10:42.071872   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 84/120
	I0708 20:10:43.073689   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 85/120
	I0708 20:10:44.075059   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 86/120
	I0708 20:10:45.076611   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 87/120
	I0708 20:10:46.078041   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 88/120
	I0708 20:10:47.079858   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 89/120
	I0708 20:10:48.081993   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 90/120
	I0708 20:10:49.083694   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 91/120
	I0708 20:10:50.086144   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 92/120
	I0708 20:10:51.087481   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 93/120
	I0708 20:10:52.088722   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 94/120
	I0708 20:10:53.090112   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 95/120
	I0708 20:10:54.091785   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 96/120
	I0708 20:10:55.093978   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 97/120
	I0708 20:10:56.095465   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 98/120
	I0708 20:10:57.096872   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 99/120
	I0708 20:10:58.099299   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 100/120
	I0708 20:10:59.101825   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 101/120
	I0708 20:11:00.103402   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 102/120
	I0708 20:11:01.105052   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 103/120
	I0708 20:11:02.106579   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 104/120
	I0708 20:11:03.108613   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 105/120
	I0708 20:11:04.110104   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 106/120
	I0708 20:11:05.111849   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 107/120
	I0708 20:11:06.113880   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 108/120
	I0708 20:11:07.115182   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 109/120
	I0708 20:11:08.117303   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 110/120
	I0708 20:11:09.118543   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 111/120
	I0708 20:11:10.119882   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 112/120
	I0708 20:11:11.121352   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 113/120
	I0708 20:11:12.122717   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 114/120
	I0708 20:11:13.124598   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 115/120
	I0708 20:11:14.125946   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 116/120
	I0708 20:11:15.127521   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 117/120
	I0708 20:11:16.129846   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 118/120
	I0708 20:11:17.131360   33615 main.go:141] libmachine: (ha-511021-m04) Waiting for machine to stop 119/120
	I0708 20:11:18.132154   33615 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0708 20:11:18.132204   33615 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0708 20:11:18.134341   33615 out.go:177] 
	W0708 20:11:18.135843   33615 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0708 20:11:18.135861   33615 out.go:239] * 
	W0708 20:11:18.138631   33615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:11:18.139816   33615 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-511021 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
E0708 20:11:29.732902   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr: exit status 3 (18.95064179s)

                                                
                                                
-- stdout --
	ha-511021
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511021-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:11:18.185136   34037 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:11:18.185250   34037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:11:18.185259   34037 out.go:304] Setting ErrFile to fd 2...
	I0708 20:11:18.185264   34037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:11:18.185461   34037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:11:18.185642   34037 out.go:298] Setting JSON to false
	I0708 20:11:18.185673   34037 mustload.go:65] Loading cluster: ha-511021
	I0708 20:11:18.185729   34037 notify.go:220] Checking for updates...
	I0708 20:11:18.186197   34037 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:11:18.186220   34037 status.go:255] checking status of ha-511021 ...
	I0708 20:11:18.186682   34037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:11:18.186754   34037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:11:18.204611   34037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
	I0708 20:11:18.205053   34037 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:11:18.205597   34037 main.go:141] libmachine: Using API Version  1
	I0708 20:11:18.205637   34037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:11:18.206034   34037 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:11:18.206236   34037 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 20:11:18.207955   34037 status.go:330] ha-511021 host status = "Running" (err=<nil>)
	I0708 20:11:18.207969   34037 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:11:18.208262   34037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:11:18.208299   34037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:11:18.223527   34037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I0708 20:11:18.223895   34037 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:11:18.224336   34037 main.go:141] libmachine: Using API Version  1
	I0708 20:11:18.224355   34037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:11:18.224628   34037 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:11:18.224796   34037 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:11:18.227856   34037 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:11:18.228258   34037 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:11:18.228281   34037 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:11:18.228516   34037 host.go:66] Checking if "ha-511021" exists ...
	I0708 20:11:18.228811   34037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:11:18.228858   34037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:11:18.244224   34037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42375
	I0708 20:11:18.244623   34037 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:11:18.245038   34037 main.go:141] libmachine: Using API Version  1
	I0708 20:11:18.245067   34037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:11:18.245306   34037 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:11:18.245479   34037 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:11:18.245651   34037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:11:18.245688   34037 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:11:18.248343   34037 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:11:18.248674   34037 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:11:18.248698   34037 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:11:18.248813   34037 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:11:18.248976   34037 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:11:18.249220   34037 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:11:18.249328   34037 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:11:18.335614   34037 ssh_runner.go:195] Run: systemctl --version
	I0708 20:11:18.344360   34037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:11:18.362949   34037 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:11:18.362979   34037 api_server.go:166] Checking apiserver status ...
	I0708 20:11:18.363016   34037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:11:18.379913   34037 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5107/cgroup
	W0708 20:11:18.390829   34037 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5107/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:11:18.390891   34037 ssh_runner.go:195] Run: ls
	I0708 20:11:18.398147   34037 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:11:18.402509   34037 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:11:18.402531   34037 status.go:422] ha-511021 apiserver status = Running (err=<nil>)
	I0708 20:11:18.402543   34037 status.go:257] ha-511021 status: &{Name:ha-511021 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:11:18.402573   34037 status.go:255] checking status of ha-511021-m02 ...
	I0708 20:11:18.402964   34037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:11:18.403005   34037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:11:18.418663   34037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I0708 20:11:18.419065   34037 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:11:18.419628   34037 main.go:141] libmachine: Using API Version  1
	I0708 20:11:18.419648   34037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:11:18.420025   34037 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:11:18.420211   34037 main.go:141] libmachine: (ha-511021-m02) Calling .GetState
	I0708 20:11:18.421766   34037 status.go:330] ha-511021-m02 host status = "Running" (err=<nil>)
	I0708 20:11:18.421776   34037 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:11:18.422108   34037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:11:18.422150   34037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:11:18.437221   34037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0708 20:11:18.437618   34037 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:11:18.438132   34037 main.go:141] libmachine: Using API Version  1
	I0708 20:11:18.438160   34037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:11:18.438487   34037 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:11:18.438676   34037 main.go:141] libmachine: (ha-511021-m02) Calling .GetIP
	I0708 20:11:18.441548   34037 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:11:18.441955   34037 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 21:06:11 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:11:18.441980   34037 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:11:18.442113   34037 host.go:66] Checking if "ha-511021-m02" exists ...
	I0708 20:11:18.442413   34037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:11:18.442457   34037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:11:18.457067   34037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33391
	I0708 20:11:18.457533   34037 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:11:18.457999   34037 main.go:141] libmachine: Using API Version  1
	I0708 20:11:18.458016   34037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:11:18.458405   34037 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:11:18.458555   34037 main.go:141] libmachine: (ha-511021-m02) Calling .DriverName
	I0708 20:11:18.458740   34037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:11:18.458757   34037 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHHostname
	I0708 20:11:18.461665   34037 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:11:18.462105   34037 main.go:141] libmachine: (ha-511021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:dd:87", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 21:06:11 +0000 UTC Type:0 Mac:52:54:00:e2:dd:87 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-511021-m02 Clientid:01:52:54:00:e2:dd:87}
	I0708 20:11:18.462130   34037 main.go:141] libmachine: (ha-511021-m02) DBG | domain ha-511021-m02 has defined IP address 192.168.39.216 and MAC address 52:54:00:e2:dd:87 in network mk-ha-511021
	I0708 20:11:18.462253   34037 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHPort
	I0708 20:11:18.462417   34037 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHKeyPath
	I0708 20:11:18.462561   34037 main.go:141] libmachine: (ha-511021-m02) Calling .GetSSHUsername
	I0708 20:11:18.462684   34037 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m02/id_rsa Username:docker}
	I0708 20:11:18.549822   34037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:11:18.568499   34037 kubeconfig.go:125] found "ha-511021" server: "https://192.168.39.254:8443"
	I0708 20:11:18.568530   34037 api_server.go:166] Checking apiserver status ...
	I0708 20:11:18.568566   34037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:11:18.588512   34037 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup
	W0708 20:11:18.599020   34037 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:11:18.599069   34037 ssh_runner.go:195] Run: ls
	I0708 20:11:18.603806   34037 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0708 20:11:18.607966   34037 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0708 20:11:18.607991   34037 status.go:422] ha-511021-m02 apiserver status = Running (err=<nil>)
	I0708 20:11:18.608001   34037 status.go:257] ha-511021-m02 status: &{Name:ha-511021-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:11:18.608019   34037 status.go:255] checking status of ha-511021-m04 ...
	I0708 20:11:18.608410   34037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:11:18.608457   34037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:11:18.623523   34037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0708 20:11:18.623895   34037 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:11:18.624353   34037 main.go:141] libmachine: Using API Version  1
	I0708 20:11:18.624373   34037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:11:18.624656   34037 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:11:18.624834   34037 main.go:141] libmachine: (ha-511021-m04) Calling .GetState
	I0708 20:11:18.626221   34037 status.go:330] ha-511021-m04 host status = "Running" (err=<nil>)
	I0708 20:11:18.626235   34037 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:11:18.626499   34037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:11:18.626543   34037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:11:18.641262   34037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0708 20:11:18.641733   34037 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:11:18.642355   34037 main.go:141] libmachine: Using API Version  1
	I0708 20:11:18.642377   34037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:11:18.642656   34037 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:11:18.642842   34037 main.go:141] libmachine: (ha-511021-m04) Calling .GetIP
	I0708 20:11:18.645464   34037 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:11:18.645831   34037 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 21:08:44 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:11:18.645862   34037 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:11:18.645988   34037 host.go:66] Checking if "ha-511021-m04" exists ...
	I0708 20:11:18.646372   34037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:11:18.646414   34037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:11:18.662584   34037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40543
	I0708 20:11:18.662996   34037 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:11:18.663445   34037 main.go:141] libmachine: Using API Version  1
	I0708 20:11:18.663489   34037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:11:18.663857   34037 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:11:18.664065   34037 main.go:141] libmachine: (ha-511021-m04) Calling .DriverName
	I0708 20:11:18.664274   34037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:11:18.664294   34037 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHHostname
	I0708 20:11:18.666992   34037 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:11:18.667414   34037 main.go:141] libmachine: (ha-511021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:2c:f7", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 21:08:44 +0000 UTC Type:0 Mac:52:54:00:be:2c:f7 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-511021-m04 Clientid:01:52:54:00:be:2c:f7}
	I0708 20:11:18.667438   34037 main.go:141] libmachine: (ha-511021-m04) DBG | domain ha-511021-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:be:2c:f7 in network mk-ha-511021
	I0708 20:11:18.667634   34037 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHPort
	I0708 20:11:18.667806   34037 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHKeyPath
	I0708 20:11:18.667956   34037 main.go:141] libmachine: (ha-511021-m04) Calling .GetSSHUsername
	I0708 20:11:18.668069   34037 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021-m04/id_rsa Username:docker}
	W0708 20:11:37.091718   34037 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.205:22: connect: no route to host
	W0708 20:11:37.091818   34037 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	E0708 20:11:37.091841   34037 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	I0708 20:11:37.091851   34037 status.go:257] ha-511021-m04 status: &{Name:ha-511021-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0708 20:11:37.091884   34037 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-511021 -n ha-511021
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-511021 logs -n 25: (1.696796735s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-511021 ssh -n ha-511021-m02 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | /home/docker/cp-test_ha-511021-m03_ha-511021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m04:/home/docker/cp-test_ha-511021-m03_ha-511021-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | ha-511021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m04 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:58 UTC |
	|         | /home/docker/cp-test_ha-511021-m03_ha-511021-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-511021 cp testdata/cp-test.txt                                                | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:58 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3985602198/001/cp-test_ha-511021-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021:/home/docker/cp-test_ha-511021-m04_ha-511021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021 sudo cat                                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /home/docker/cp-test_ha-511021-m04_ha-511021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m02:/home/docker/cp-test_ha-511021-m04_ha-511021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m02 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /home/docker/cp-test_ha-511021-m04_ha-511021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m03:/home/docker/cp-test_ha-511021-m04_ha-511021-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n                                                                 | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | ha-511021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-511021 ssh -n ha-511021-m03 sudo cat                                          | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC | 08 Jul 24 19:59 UTC |
	|         | /home/docker/cp-test_ha-511021-m04_ha-511021-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-511021 node stop m02 -v=7                                                     | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 19:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-511021 node start m02 -v=7                                                    | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-511021 -v=7                                                           | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-511021 -v=7                                                                | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-511021 --wait=true -v=7                                                    | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:04 UTC | 08 Jul 24 20:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-511021                                                                | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:08 UTC |                     |
	| node    | ha-511021 node delete m03 -v=7                                                   | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:08 UTC | 08 Jul 24 20:09 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-511021 stop -v=7                                                              | ha-511021 | jenkins | v1.33.1 | 08 Jul 24 20:09 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 20:04:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 20:04:19.433891   31820 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:04:19.434119   31820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:04:19.434130   31820 out.go:304] Setting ErrFile to fd 2...
	I0708 20:04:19.434135   31820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:04:19.434313   31820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:04:19.434835   31820 out.go:298] Setting JSON to false
	I0708 20:04:19.435748   31820 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2808,"bootTime":1720466251,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:04:19.435808   31820 start.go:139] virtualization: kvm guest
	I0708 20:04:19.438977   31820 out.go:177] * [ha-511021] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:04:19.440567   31820 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:04:19.440572   31820 notify.go:220] Checking for updates...
	I0708 20:04:19.442375   31820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:04:19.443971   31820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:04:19.445439   31820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:04:19.446678   31820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:04:19.448014   31820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:04:19.449601   31820 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:04:19.449687   31820 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:04:19.450116   31820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:04:19.450166   31820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:04:19.465702   31820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I0708 20:04:19.466129   31820 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:04:19.466637   31820 main.go:141] libmachine: Using API Version  1
	I0708 20:04:19.466661   31820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:04:19.467039   31820 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:04:19.467215   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:04:19.504051   31820 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 20:04:19.505519   31820 start.go:297] selected driver: kvm2
	I0708 20:04:19.505533   31820 start.go:901] validating driver "kvm2" against &{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.205 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:04:19.505732   31820 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:04:19.506179   31820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:04:19.506252   31820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:04:19.521503   31820 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:04:19.522246   31820 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:04:19.522324   31820 cni.go:84] Creating CNI manager for ""
	I0708 20:04:19.522337   31820 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0708 20:04:19.522424   31820 start.go:340] cluster config:
	{Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.205 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:04:19.522566   31820 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:04:19.524478   31820 out.go:177] * Starting "ha-511021" primary control-plane node in "ha-511021" cluster
	I0708 20:04:19.525717   31820 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:04:19.525747   31820 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 20:04:19.525757   31820 cache.go:56] Caching tarball of preloaded images
	I0708 20:04:19.525832   31820 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 20:04:19.525844   31820 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 20:04:19.525956   31820 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/config.json ...
	I0708 20:04:19.526133   31820 start.go:360] acquireMachinesLock for ha-511021: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:04:19.526169   31820 start.go:364] duration metric: took 19.997µs to acquireMachinesLock for "ha-511021"
	I0708 20:04:19.526182   31820 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:04:19.526193   31820 fix.go:54] fixHost starting: 
	I0708 20:04:19.526435   31820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:04:19.526463   31820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:04:19.541137   31820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0708 20:04:19.541532   31820 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:04:19.542033   31820 main.go:141] libmachine: Using API Version  1
	I0708 20:04:19.542052   31820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:04:19.542369   31820 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:04:19.542542   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:04:19.542706   31820 main.go:141] libmachine: (ha-511021) Calling .GetState
	I0708 20:04:19.544292   31820 fix.go:112] recreateIfNeeded on ha-511021: state=Running err=<nil>
	W0708 20:04:19.544309   31820 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:04:19.546309   31820 out.go:177] * Updating the running kvm2 "ha-511021" VM ...
	I0708 20:04:19.547690   31820 machine.go:94] provisionDockerMachine start ...
	I0708 20:04:19.547710   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:04:19.547959   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:04:19.550245   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.550621   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:19.550646   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.550810   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:04:19.550990   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:19.551159   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:19.551274   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:04:19.551434   31820 main.go:141] libmachine: Using SSH client type: native
	I0708 20:04:19.551647   31820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 20:04:19.551660   31820 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:04:19.666727   31820 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-511021
	
	I0708 20:04:19.666764   31820 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 20:04:19.667062   31820 buildroot.go:166] provisioning hostname "ha-511021"
	I0708 20:04:19.667084   31820 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 20:04:19.667285   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:04:19.669795   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.670211   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:19.670241   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.670404   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:04:19.670596   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:19.670736   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:19.670866   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:04:19.671005   31820 main.go:141] libmachine: Using SSH client type: native
	I0708 20:04:19.671170   31820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 20:04:19.671187   31820 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-511021 && echo "ha-511021" | sudo tee /etc/hostname
	I0708 20:04:19.795732   31820 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-511021
	
	I0708 20:04:19.795767   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:04:19.798619   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.799001   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:19.799144   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.799211   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:04:19.799400   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:19.799607   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:19.799735   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:04:19.799884   31820 main.go:141] libmachine: Using SSH client type: native
	I0708 20:04:19.800048   31820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 20:04:19.800063   31820 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-511021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-511021/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-511021' | sudo tee -a /etc/hosts; 
				fi
			fi
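For reference, a quick way to confirm this hostname step took effect is to re-run the same checks from the host, using the `minikube -p <profile> ssh "..."` form that appears elsewhere in this report (a minimal sketch; the profile name ha-511021 is taken from this log):

    # Hostname should now be ha-511021, and /etc/hosts should carry the 127.0.1.1 alias
    minikube -p ha-511021 ssh "hostname"
    minikube -p ha-511021 ssh "grep ha-511021 /etc/hosts"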
	I0708 20:04:19.912550   31820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:04:19.912577   31820 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:04:19.912604   31820 buildroot.go:174] setting up certificates
	I0708 20:04:19.912612   31820 provision.go:84] configureAuth start
	I0708 20:04:19.912619   31820 main.go:141] libmachine: (ha-511021) Calling .GetMachineName
	I0708 20:04:19.912886   31820 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:04:19.915407   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.915763   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:19.915783   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.916004   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:04:19.918348   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.918823   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:19.918846   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:19.919004   31820 provision.go:143] copyHostCerts
	I0708 20:04:19.919029   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:04:19.919056   31820 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:04:19.919097   31820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:04:19.919164   31820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:04:19.919255   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:04:19.919272   31820 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:04:19.919282   31820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:04:19.919309   31820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:04:19.919360   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:04:19.919376   31820 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:04:19.919382   31820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:04:19.919401   31820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:04:19.919483   31820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.ha-511021 san=[127.0.0.1 192.168.39.33 ha-511021 localhost minikube]
	I0708 20:04:20.075593   31820 provision.go:177] copyRemoteCerts
	I0708 20:04:20.075652   31820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:04:20.075673   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:04:20.078518   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:20.078866   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:20.078894   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:20.079035   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:04:20.079216   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:20.079335   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:04:20.079512   31820 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:04:20.169045   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 20:04:20.169115   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:04:20.196182   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 20:04:20.196265   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0708 20:04:20.222424   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 20:04:20.222483   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:04:20.257172   31820 provision.go:87] duration metric: took 344.546164ms to configureAuth
	I0708 20:04:20.257207   31820 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:04:20.257450   31820 config.go:182] Loaded profile config "ha-511021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:04:20.257520   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:04:20.260439   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:20.260857   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:04:20.260885   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:04:20.261077   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:04:20.261304   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:20.261484   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:04:20.261660   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:04:20.261856   31820 main.go:141] libmachine: Using SSH client type: native
	I0708 20:04:20.262034   31820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 20:04:20.262049   31820 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:05:51.087814   31820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:05:51.087843   31820 machine.go:97] duration metric: took 1m31.540138601s to provisionDockerMachine
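Most of that 1m31.5s falls between the drop-in/restart command issued at 20:04:20 and its SSH result at 20:05:51, i.e. the `systemctl restart crio` above. A rough diagnostic sketch for that step (illustrative only, not part of the test run):

    # Show the generated CRI-O drop-in and the crio unit's recent restart history
    minikube -p ha-511021 ssh "cat /etc/sysconfig/crio.minikube"
    minikube -p ha-511021 ssh "sudo journalctl -u crio -n 50 --no-pager"
    minikube -p ha-511021 ssh "systemctl show crio -p ActiveEnterTimestamp"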
	I0708 20:05:51.087860   31820 start.go:293] postStartSetup for "ha-511021" (driver="kvm2")
	I0708 20:05:51.087871   31820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:05:51.087887   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:05:51.088215   31820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:05:51.088249   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:05:51.091430   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.091926   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:05:51.091957   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.092151   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:05:51.092357   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:05:51.092529   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:05:51.092693   31820 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:05:51.179683   31820 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:05:51.184280   31820 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:05:51.184307   31820 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:05:51.184368   31820 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:05:51.184463   31820 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:05:51.184476   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /etc/ssl/certs/131412.pem
	I0708 20:05:51.184588   31820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:05:51.194759   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:05:51.219941   31820 start.go:296] duration metric: took 132.066981ms for postStartSetup
	I0708 20:05:51.219984   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:05:51.220286   31820 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0708 20:05:51.220320   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:05:51.223247   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.223698   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:05:51.223722   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.223940   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:05:51.224125   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:05:51.224249   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:05:51.224346   31820 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	W0708 20:05:51.312352   31820 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0708 20:05:51.312379   31820 fix.go:56] duration metric: took 1m31.786189185s for fixHost
	I0708 20:05:51.312400   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:05:51.315061   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.315423   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:05:51.315461   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.315720   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:05:51.316014   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:05:51.316171   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:05:51.316286   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:05:51.316441   31820 main.go:141] libmachine: Using SSH client type: native
	I0708 20:05:51.316595   31820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0708 20:05:51.316605   31820 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:05:51.424426   31820 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720469151.369052026
	
	I0708 20:05:51.424447   31820 fix.go:216] guest clock: 1720469151.369052026
	I0708 20:05:51.424457   31820 fix.go:229] Guest: 2024-07-08 20:05:51.369052026 +0000 UTC Remote: 2024-07-08 20:05:51.312387328 +0000 UTC m=+91.916293259 (delta=56.664698ms)
	I0708 20:05:51.424497   31820 fix.go:200] guest clock delta is within tolerance: 56.664698ms
	I0708 20:05:51.424503   31820 start.go:83] releasing machines lock for "ha-511021", held for 1m31.898325471s
	I0708 20:05:51.424528   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:05:51.424775   31820 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:05:51.427789   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.428154   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:05:51.428182   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.428353   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:05:51.428828   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:05:51.428995   31820 main.go:141] libmachine: (ha-511021) Calling .DriverName
	I0708 20:05:51.429091   31820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:05:51.429128   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:05:51.429195   31820 ssh_runner.go:195] Run: cat /version.json
	I0708 20:05:51.429219   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHHostname
	I0708 20:05:51.431850   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.432270   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.432307   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:05:51.432325   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.432494   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:05:51.432678   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:05:51.432752   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:05:51.432774   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:05:51.432837   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:05:51.432949   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHPort
	I0708 20:05:51.433014   31820 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:05:51.433102   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHKeyPath
	I0708 20:05:51.433244   31820 main.go:141] libmachine: (ha-511021) Calling .GetSSHUsername
	I0708 20:05:51.433371   31820 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/ha-511021/id_rsa Username:docker}
	I0708 20:05:51.512909   31820 ssh_runner.go:195] Run: systemctl --version
	I0708 20:05:51.541129   31820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:05:51.706403   31820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:05:51.718232   31820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:05:51.718290   31820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:05:51.727852   31820 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0708 20:05:51.727880   31820 start.go:494] detecting cgroup driver to use...
	I0708 20:05:51.727940   31820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:05:51.743918   31820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:05:51.758176   31820 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:05:51.758256   31820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:05:51.772317   31820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:05:51.785878   31820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:05:51.937937   31820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:05:52.099749   31820 docker.go:233] disabling docker service ...
	I0708 20:05:52.099818   31820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:05:52.120179   31820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:05:52.134858   31820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:05:52.282618   31820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:05:52.438209   31820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:05:52.452316   31820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:05:52.472171   31820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:05:52.472242   31820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.483334   31820 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:05:52.483412   31820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.494490   31820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.505472   31820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.516573   31820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:05:52.527809   31820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.538778   31820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.550272   31820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:05:52.561862   31820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:05:52.572250   31820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:05:52.582227   31820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:05:52.731897   31820 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:05:59.937325   31820 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.205385672s)
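A compact way to double-check the CRI-O settings rewritten by the sed commands above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) once the restart completes, sketched with the same ssh form used in this run:

    # Keys minikube just rewrote in the drop-in config
    minikube -p ha-511021 ssh "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    # Effective config as CRI-O itself reports it
    minikube -p ha-511021 ssh "sudo crio config | grep -E 'pause_image|cgroup_manager'"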
	I0708 20:05:59.937352   31820 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:05:59.937396   31820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:05:59.942900   31820 start.go:562] Will wait 60s for crictl version
	I0708 20:05:59.942959   31820 ssh_runner.go:195] Run: which crictl
	I0708 20:05:59.946832   31820 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:05:59.990094   31820 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:05:59.990186   31820 ssh_runner.go:195] Run: crio --version
	I0708 20:06:00.020049   31820 ssh_runner.go:195] Run: crio --version
	I0708 20:06:00.053548   31820 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:06:00.054767   31820 main.go:141] libmachine: (ha-511021) Calling .GetIP
	I0708 20:06:00.057518   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:06:00.057890   31820 main.go:141] libmachine: (ha-511021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1e:ad", ip: ""} in network mk-ha-511021: {Iface:virbr1 ExpiryTime:2024-07-08 20:54:53 +0000 UTC Type:0 Mac:52:54:00:fe:1e:ad Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-511021 Clientid:01:52:54:00:fe:1e:ad}
	I0708 20:06:00.057914   31820 main.go:141] libmachine: (ha-511021) DBG | domain ha-511021 has defined IP address 192.168.39.33 and MAC address 52:54:00:fe:1e:ad in network mk-ha-511021
	I0708 20:06:00.058127   31820 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 20:06:00.063257   31820 kubeadm.go:877] updating cluster {Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.205 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:06:00.063438   31820 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:06:00.063511   31820 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:06:00.106872   31820 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:06:00.106894   31820 crio.go:433] Images already preloaded, skipping extraction
	I0708 20:06:00.106940   31820 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:06:00.146952   31820 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:06:00.146978   31820 cache_images.go:84] Images are preloaded, skipping loading
	I0708 20:06:00.146987   31820 kubeadm.go:928] updating node { 192.168.39.33 8443 v1.30.2 crio true true} ...
	I0708 20:06:00.147087   31820 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-511021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:06:00.147149   31820 ssh_runner.go:195] Run: crio config
	I0708 20:06:00.203044   31820 cni.go:84] Creating CNI manager for ""
	I0708 20:06:00.203319   31820 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0708 20:06:00.203332   31820 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:06:00.203363   31820 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.33 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-511021 NodeName:ha-511021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:06:00.203547   31820 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-511021"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
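The rendered kubeadm config above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp step further down in this log). A hedged sketch for comparing it with what the cluster actually stores (the kubectl context name matches the profile, as with the --context usage elsewhere in this report):

    # Config file as rendered on the node
    minikube -p ha-511021 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
    # ClusterConfiguration that kubeadm keeps in the kubeadm-config ConfigMap
    kubectl --context ha-511021 -n kube-system get configmap kubeadm-config -o yaml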
	I0708 20:06:00.203577   31820 kube-vip.go:115] generating kube-vip config ...
	I0708 20:06:00.203628   31820 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0708 20:06:00.216056   31820 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0708 20:06:00.216179   31820 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
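With this static pod manifest in place, the API-server VIP 192.168.39.254 from the config above should be bound on one control-plane node and answering on port 8443. A minimal check from the host running the libvirt network (illustrative; assumes the host can reach 192.168.39.0/24):

    # VIP should be attached to eth0 on whichever node holds the plndr-cp-lock lease
    minikube -p ha-511021 ssh "ip addr show eth0 | grep 192.168.39.254"
    # kube-vip runs as a static pod in kube-system
    minikube -p ha-511021 ssh "sudo crictl ps --name kube-vip"
    # The load-balanced API endpoint itself
    curl -k https://192.168.39.254:8443/version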
	I0708 20:06:00.216244   31820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:06:00.226088   31820 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:06:00.226159   31820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0708 20:06:00.236923   31820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0708 20:06:00.254962   31820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:06:00.273144   31820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0708 20:06:00.291210   31820 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0708 20:06:00.310081   31820 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0708 20:06:00.314454   31820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:06:00.463339   31820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:06:00.478399   31820 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021 for IP: 192.168.39.33
	I0708 20:06:00.478423   31820 certs.go:194] generating shared ca certs ...
	I0708 20:06:00.478443   31820 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:06:00.478591   31820 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:06:00.478640   31820 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:06:00.478648   31820 certs.go:256] generating profile certs ...
	I0708 20:06:00.478728   31820 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/client.key
	I0708 20:06:00.478759   31820 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a35ec44e
	I0708 20:06:00.478775   31820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a35ec44e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.33 192.168.39.216 192.168.39.70 192.168.39.254]
	I0708 20:06:00.571186   31820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a35ec44e ...
	I0708 20:06:00.571218   31820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a35ec44e: {Name:mk238071fcb109f666cf0ada333a915684a72d77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:06:00.571386   31820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a35ec44e ...
	I0708 20:06:00.571396   31820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a35ec44e: {Name:mkf31cd3a0fa10858e99ac8972f3ab7373aa3fc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:06:00.571486   31820 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt.a35ec44e -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt
	I0708 20:06:00.571618   31820 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key.a35ec44e -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key
	I0708 20:06:00.571748   31820 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key
	I0708 20:06:00.571767   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 20:06:00.571782   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 20:06:00.571799   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 20:06:00.571812   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 20:06:00.571822   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 20:06:00.571833   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 20:06:00.571844   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 20:06:00.571857   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 20:06:00.571914   31820 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:06:00.571944   31820 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:06:00.571952   31820 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:06:00.571972   31820 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:06:00.571995   31820 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:06:00.572015   31820 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:06:00.572050   31820 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:06:00.572074   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:06:00.572088   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem -> /usr/share/ca-certificates/13141.pem
	I0708 20:06:00.572100   31820 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /usr/share/ca-certificates/131412.pem
	I0708 20:06:00.572665   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:06:00.599701   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:06:00.624931   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:06:00.649461   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:06:00.674557   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0708 20:06:00.699217   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 20:06:00.723772   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:06:00.749601   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/ha-511021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:06:00.775906   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:06:00.802649   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:06:00.829779   31820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:06:00.858623   31820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:06:00.877541   31820 ssh_runner.go:195] Run: openssl version
	I0708 20:06:00.884120   31820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:06:00.895954   31820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:06:00.901215   31820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:06:00.901283   31820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:06:00.907534   31820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:06:00.918094   31820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:06:00.931616   31820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:06:00.937167   31820 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:06:00.937236   31820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:06:00.943312   31820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:06:00.953330   31820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:06:00.964710   31820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:06:00.970009   31820 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:06:00.970089   31820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:06:00.976408   31820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:06:00.987139   31820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:06:00.992480   31820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:06:00.998585   31820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:06:01.005031   31820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:06:01.011228   31820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:06:01.017338   31820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:06:01.023844   31820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
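Two OpenSSL idioms recur in the lines above: `-hash -noout` prints the subject hash that names the /etc/ssl/certs/<hash>.0 trust-store symlink, and `-checkend 86400` exits non-zero if the certificate expires within the next 24 hours. Spelled out as a generic sketch (paths taken from this log):

    # Trust-store symlink: link a CA cert under its OpenSSL subject hash
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"

    # Expiry guard: non-zero exit means the cert expires within 86400s (24h)
    if ! openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
        echo "certificate expires within 24h" >&2
    fi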
	I0708 20:06:01.030120   31820 kubeadm.go:391] StartCluster: {Name:ha-511021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-511021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.70 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.205 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
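
	The StartCluster dump above shows the ha-511021 topology this test exercises: three control-plane nodes (192.168.39.33, m02 at .216, m03 at .70) plus one worker (m04 at .205), all on CRI-O with the API served through the HA VIP 192.168.39.254:8443. A hypothetical, trimmed-down Go rendering of just the Nodes list (field names mirror the log output, but this is not minikube's actual config type):

package main

import "fmt"

// Node is a reduced, illustrative view of the per-node entries printed above.
type Node struct {
	Name         string
	IP           string
	Port         int
	ControlPlane bool
	Worker       bool
}

func main() {
	nodes := []Node{
		{Name: "", IP: "192.168.39.33", Port: 8443, ControlPlane: true, Worker: true},
		{Name: "m02", IP: "192.168.39.216", Port: 8443, ControlPlane: true, Worker: true},
		{Name: "m03", IP: "192.168.39.70", Port: 8443, ControlPlane: true, Worker: true},
		{Name: "m04", IP: "192.168.39.205", Port: 0, ControlPlane: false, Worker: true},
	}
	for _, n := range nodes {
		fmt.Printf("%-4s %-15s control-plane=%v\n", n.Name, n.IP, n.ControlPlane)
	}
}
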
	I0708 20:06:01.030228   31820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:06:01.030294   31820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:06:01.081010   31820 cri.go:89] found id: "07b1e06f2165b9a75c0179c4493d83cdf879cdcfbc5962391d44f2a78f573e14"
	I0708 20:06:01.081032   31820 cri.go:89] found id: "10819bc348798228cb925cfd626dd580cd269711d9fb52b5386026c657c7a2c5"
	I0708 20:06:01.081036   31820 cri.go:89] found id: "08da972caef161c88bc90649163dc4eaaa5cc7a0a9f60dd1e9f124634d88a270"
	I0708 20:06:01.081039   31820 cri.go:89] found id: "693a49012ffbe0f1af1ebb92fcad97b83ab34e0d244582a1e7ad6e2a12e4698a"
	I0708 20:06:01.081043   31820 cri.go:89] found id: "6e2b3c8d333ac8c5ad3ee8d4a9f8ff6fbb41287e55928605d7d49ae153738db2"
	I0708 20:06:01.081047   31820 cri.go:89] found id: "6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa"
	I0708 20:06:01.081051   31820 cri.go:89] found id: "499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7"
	I0708 20:06:01.081055   31820 cri.go:89] found id: "ef250a5d2c6701c36dbb63dc1494bd02a11629e58b9b6ad5ab4a0585f444dbe9"
	I0708 20:06:01.081059   31820 cri.go:89] found id: "67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19"
	I0708 20:06:01.081066   31820 cri.go:89] found id: "dd8ad312a5acddb79be337823087ee2b87d36262359d11cd3661e4a31d3026ec"
	I0708 20:06:01.081070   31820 cri.go:89] found id: "08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e"
	I0708 20:06:01.081075   31820 cri.go:89] found id: "0ed1c59e04eb8e9c5a9503853a55dd8185bbd443c359ce6d37d9f0c062505e67"
	I0708 20:06:01.081079   31820 cri.go:89] found id: "019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9"
	I0708 20:06:01.081083   31820 cri.go:89] found id: "e4326cf8a34b61a7baf29d68ba8e1b5c1c5f72972d74e1a73df5303f1cef7586"
	I0708 20:06:01.081088   31820 cri.go:89] found id: ""
	I0708 20:06:01.081128   31820 ssh_runner.go:195] Run: sudo runc list -f json
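
	The "found id" entries above come from the preceding `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call: one container ID per line, with a trailing empty entry. A minimal sketch of collecting the same IDs by shelling out to crictl (crictl must be installed and typically needs root); illustrative only, not minikube's cri.go implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs returns the IDs of all kube-system containers,
// running or exited, as reported by crictl.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
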
	
	
	==> CRI-O <==
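
	The entries below are CRI-O's server-side trace of standard CRI RPCs (Version, ImageFsInfo, ListContainers) being polled by a CRI client roughly every 50ms. A minimal sketch of issuing the same Version call yourself over the CRI gRPC API, assuming the default CRI-O socket path and the k8s.io/cri-api v1 bindings (access to the socket normally requires root):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket; the daemon logging below listens here.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC as the "/runtime.v1.RuntimeService/Version" entries in the log.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
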
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.706765550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720469497706739402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f9a8935-2768-4137-bf79-c8cdbe913a34 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.707479172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=759820dc-d34d-4120-87a1-eedadc0005fb name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.707555590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=759820dc-d34d-4120-87a1-eedadc0005fb name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.708018289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7e9a087907d7c028ca0b7d30efd5d52a3aa4d4ec1c01d4694ce9f29a6ccff49,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720469257887325435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a092bcfc2c4cf52b3a7a13ad5de69f2705f9f47507b1ff3c846fd063dc62b0e,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720469225892720085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad6fd7c3f9cad31104529097c8feeb16ff0c5ce58c2ed27a50b3743232c0bc5,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720469210892212686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb8ddfc4919dff163e345f60e168e06f35c9d2988df41561e920c4448bd8fed,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720469208901888135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec27cea09fe4b6c1702ac07555fb0dc3e8a50de265f5516597a359c8e5efa4e,PodSandboxId:f97c5267622e6708415275ef934c949e657a2f8147e4826cab37b534dc64d8e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720469200285006902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3fc8ef1e0299d99ef60bf4fbeae19194d5c36940ec08ad10e6ce0ce357c232,PodSandboxId:f586e1626531019f80ebbd1a8ced37f08e948582fa0190d1ec4231539ca1986b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720469178439433586,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29a9ed466df566b5a45a87a004582e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:6dea00926f165df26d06c6421a15f2c6f0124a7ee17dcff8893fa517b3e434a7,PodSandboxId:1c8757727c0796600d9c33cd7b1d60eb582f2a8a8d270a0a592c16485c6b1184,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720469167055477055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:80d3a01323446653a7398eff9a324e1447553ba76ff841a403de2c956bcfd4ba,PodSandboxId:15bfe51f73f1d04fcb452c2b9823a6053077a02ca13b7b7df4a96dbe1c4bf4d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469167160050965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38802be5ddf5a10afb78b7100b1dd555db233a693a398965ccca1743380bb1fe,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720469167067135118,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720469166796838413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f47fb0f400e915295b2ec21e227b8000e1936d00aa1e9265345bcf18da00776,PodSandboxId:fe17ace71d58c0de7ba910b637efe0025726e7dccf5a0e662e230cc9592510be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469166944613398,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2303835cb3470ac48e1c2f7eeacbd0c55e180b7acf710d2929e5f1f7c987570,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720469166990626313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f3
82d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c97c9bf4b2ba515d9c57ff1ad82fdc07c3fa398efe0f30e200eeb4afa9b8b6d,PodSandboxId:933bc29f90e9808f147ef51a67f08bb1bc76f5e51a4380d2fe5089323f512648,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720469166802683027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string
]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59de9e6a107817af76862bda008f35a5bdbc9c446829a20e23b865829f0e4faa,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720469166736092660,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bde8b17ea0c0a6fdba42f0b205c7d9bcbc19c9c1b529fc4a8f65bd2e6c9c994,PodSandboxId:93bc3377fc8a32869f1698a0c90a2260b2d53df153fb28fe29e1ab8bebe272dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720469166726621718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kuber
netes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720468678300626732,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernet
es.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535991336335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535981042931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f
6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720468532672426940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d76
91a75a899,State:CONTAINER_EXITED,CreatedAt:1720468512224119753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt
:1720468512188864425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=759820dc-d34d-4120-87a1-eedadc0005fb name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.754999112Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d39700e-8898-48f0-8c30-bb9a1691f7d7 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.755089879Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d39700e-8898-48f0-8c30-bb9a1691f7d7 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.756258129Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4511aa16-c370-465d-bea0-80bbae3479c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.756688555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720469497756666489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4511aa16-c370-465d-bea0-80bbae3479c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.757216623Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b830a48c-978b-485d-a544-9e453a486017 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.757319683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b830a48c-978b-485d-a544-9e453a486017 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.757773613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7e9a087907d7c028ca0b7d30efd5d52a3aa4d4ec1c01d4694ce9f29a6ccff49,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720469257887325435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a092bcfc2c4cf52b3a7a13ad5de69f2705f9f47507b1ff3c846fd063dc62b0e,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720469225892720085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad6fd7c3f9cad31104529097c8feeb16ff0c5ce58c2ed27a50b3743232c0bc5,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720469210892212686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb8ddfc4919dff163e345f60e168e06f35c9d2988df41561e920c4448bd8fed,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720469208901888135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec27cea09fe4b6c1702ac07555fb0dc3e8a50de265f5516597a359c8e5efa4e,PodSandboxId:f97c5267622e6708415275ef934c949e657a2f8147e4826cab37b534dc64d8e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720469200285006902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3fc8ef1e0299d99ef60bf4fbeae19194d5c36940ec08ad10e6ce0ce357c232,PodSandboxId:f586e1626531019f80ebbd1a8ced37f08e948582fa0190d1ec4231539ca1986b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720469178439433586,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29a9ed466df566b5a45a87a004582e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:6dea00926f165df26d06c6421a15f2c6f0124a7ee17dcff8893fa517b3e434a7,PodSandboxId:1c8757727c0796600d9c33cd7b1d60eb582f2a8a8d270a0a592c16485c6b1184,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720469167055477055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:80d3a01323446653a7398eff9a324e1447553ba76ff841a403de2c956bcfd4ba,PodSandboxId:15bfe51f73f1d04fcb452c2b9823a6053077a02ca13b7b7df4a96dbe1c4bf4d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469167160050965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38802be5ddf5a10afb78b7100b1dd555db233a693a398965ccca1743380bb1fe,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720469167067135118,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720469166796838413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f47fb0f400e915295b2ec21e227b8000e1936d00aa1e9265345bcf18da00776,PodSandboxId:fe17ace71d58c0de7ba910b637efe0025726e7dccf5a0e662e230cc9592510be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469166944613398,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2303835cb3470ac48e1c2f7eeacbd0c55e180b7acf710d2929e5f1f7c987570,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720469166990626313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f3
82d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c97c9bf4b2ba515d9c57ff1ad82fdc07c3fa398efe0f30e200eeb4afa9b8b6d,PodSandboxId:933bc29f90e9808f147ef51a67f08bb1bc76f5e51a4380d2fe5089323f512648,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720469166802683027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string
]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59de9e6a107817af76862bda008f35a5bdbc9c446829a20e23b865829f0e4faa,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720469166736092660,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bde8b17ea0c0a6fdba42f0b205c7d9bcbc19c9c1b529fc4a8f65bd2e6c9c994,PodSandboxId:93bc3377fc8a32869f1698a0c90a2260b2d53df153fb28fe29e1ab8bebe272dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720469166726621718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kuber
netes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720468678300626732,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernet
es.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535991336335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535981042931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f
6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720468532672426940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d76
91a75a899,State:CONTAINER_EXITED,CreatedAt:1720468512224119753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt
:1720468512188864425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b830a48c-978b-485d-a544-9e453a486017 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.804160952Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=137aae97-58a9-4f44-9ba5-1145fc61d81a name=/runtime.v1.RuntimeService/Version
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.804284220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=137aae97-58a9-4f44-9ba5-1145fc61d81a name=/runtime.v1.RuntimeService/Version
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.805310548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d37d8f9f-5bdb-45fd-b3ea-42ed19c47d2b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.805747038Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720469497805727657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d37d8f9f-5bdb-45fd-b3ea-42ed19c47d2b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.806388696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17f4682a-984f-44ef-9bbf-d5c3f3187348 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.806466703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17f4682a-984f-44ef-9bbf-d5c3f3187348 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.807059690Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7e9a087907d7c028ca0b7d30efd5d52a3aa4d4ec1c01d4694ce9f29a6ccff49,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720469257887325435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a092bcfc2c4cf52b3a7a13ad5de69f2705f9f47507b1ff3c846fd063dc62b0e,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720469225892720085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad6fd7c3f9cad31104529097c8feeb16ff0c5ce58c2ed27a50b3743232c0bc5,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720469210892212686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb8ddfc4919dff163e345f60e168e06f35c9d2988df41561e920c4448bd8fed,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720469208901888135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec27cea09fe4b6c1702ac07555fb0dc3e8a50de265f5516597a359c8e5efa4e,PodSandboxId:f97c5267622e6708415275ef934c949e657a2f8147e4826cab37b534dc64d8e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720469200285006902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3fc8ef1e0299d99ef60bf4fbeae19194d5c36940ec08ad10e6ce0ce357c232,PodSandboxId:f586e1626531019f80ebbd1a8ced37f08e948582fa0190d1ec4231539ca1986b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720469178439433586,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29a9ed466df566b5a45a87a004582e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:6dea00926f165df26d06c6421a15f2c6f0124a7ee17dcff8893fa517b3e434a7,PodSandboxId:1c8757727c0796600d9c33cd7b1d60eb582f2a8a8d270a0a592c16485c6b1184,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720469167055477055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:80d3a01323446653a7398eff9a324e1447553ba76ff841a403de2c956bcfd4ba,PodSandboxId:15bfe51f73f1d04fcb452c2b9823a6053077a02ca13b7b7df4a96dbe1c4bf4d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469167160050965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38802be5ddf5a10afb78b7100b1dd555db233a693a398965ccca1743380bb1fe,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720469167067135118,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720469166796838413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f47fb0f400e915295b2ec21e227b8000e1936d00aa1e9265345bcf18da00776,PodSandboxId:fe17ace71d58c0de7ba910b637efe0025726e7dccf5a0e662e230cc9592510be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469166944613398,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2303835cb3470ac48e1c2f7eeacbd0c55e180b7acf710d2929e5f1f7c987570,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720469166990626313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f3
82d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c97c9bf4b2ba515d9c57ff1ad82fdc07c3fa398efe0f30e200eeb4afa9b8b6d,PodSandboxId:933bc29f90e9808f147ef51a67f08bb1bc76f5e51a4380d2fe5089323f512648,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720469166802683027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string
]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59de9e6a107817af76862bda008f35a5bdbc9c446829a20e23b865829f0e4faa,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720469166736092660,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bde8b17ea0c0a6fdba42f0b205c7d9bcbc19c9c1b529fc4a8f65bd2e6c9c994,PodSandboxId:93bc3377fc8a32869f1698a0c90a2260b2d53df153fb28fe29e1ab8bebe272dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720469166726621718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kuber
netes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720468678300626732,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernet
es.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535991336335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535981042931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f
6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720468532672426940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d76
91a75a899,State:CONTAINER_EXITED,CreatedAt:1720468512224119753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt
:1720468512188864425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17f4682a-984f-44ef-9bbf-d5c3f3187348 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.848347454Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3f4b261-aee4-4a5d-9b4d-d614bce102c3 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.848471441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3f4b261-aee4-4a5d-9b4d-d614bce102c3 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.850250652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab44ad11-eaba-47f5-9359-82eb4e0fe348 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.851197316Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720469497851110892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab44ad11-eaba-47f5-9359-82eb4e0fe348 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.852464791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=baa84b06-9249-47cc-a282-d26f20f5229f name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.852521789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=baa84b06-9249-47cc-a282-d26f20f5229f name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:11:37 ha-511021 crio[3868]: time="2024-07-08 20:11:37.853020508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7e9a087907d7c028ca0b7d30efd5d52a3aa4d4ec1c01d4694ce9f29a6ccff49,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720469257887325435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a092bcfc2c4cf52b3a7a13ad5de69f2705f9f47507b1ff3c846fd063dc62b0e,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720469225892720085,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad6fd7c3f9cad31104529097c8feeb16ff0c5ce58c2ed27a50b3743232c0bc5,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720469210892212686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb8ddfc4919dff163e345f60e168e06f35c9d2988df41561e920c4448bd8fed,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720469208901888135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f382d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec27cea09fe4b6c1702ac07555fb0dc3e8a50de265f5516597a359c8e5efa4e,PodSandboxId:f97c5267622e6708415275ef934c949e657a2f8147e4826cab37b534dc64d8e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720469200285006902,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernetes.container.hash: bb0edd48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3fc8ef1e0299d99ef60bf4fbeae19194d5c36940ec08ad10e6ce0ce357c232,PodSandboxId:f586e1626531019f80ebbd1a8ced37f08e948582fa0190d1ec4231539ca1986b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1720469178439433586,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29a9ed466df566b5a45a87a004582e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:6dea00926f165df26d06c6421a15f2c6f0124a7ee17dcff8893fa517b3e434a7,PodSandboxId:1c8757727c0796600d9c33cd7b1d60eb582f2a8a8d270a0a592c16485c6b1184,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720469167055477055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:80d3a01323446653a7398eff9a324e1447553ba76ff841a403de2c956bcfd4ba,PodSandboxId:15bfe51f73f1d04fcb452c2b9823a6053077a02ca13b7b7df4a96dbe1c4bf4d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469167160050965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38802be5ddf5a10afb78b7100b1dd555db233a693a398965ccca1743380bb1fe,PodSandboxId:931eb703b91227059694c9d315f970e72afc388356a0cfa6d19123d46318443c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720469167067135118,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4f49v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f0b50ca-73cb-4ffb-9676-09e3a28d7636,},Annotations:map[string]string{io.kubernetes.container.hash: e995f17e,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4,PodSandboxId:f8007a8b858804e1684daacb3e997ad84fc9526c28a381f45996f4312bc79c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720469166796838413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d02def4-3af1-4268-a8fa-072c6fd71c83,},Annotations:map[string]string{io.kubernetes.container.hash: 325c63e9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f47fb0f400e915295b2ec21e227b8000e1936d00aa1e9265345bcf18da00776,PodSandboxId:fe17ace71d58c0de7ba910b637efe0025726e7dccf5a0e662e230cc9592510be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720469166944613398,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2303835cb3470ac48e1c2f7eeacbd0c55e180b7acf710d2929e5f1f7c987570,PodSandboxId:25a60047a3471b730682dbec45488cafda18409d3c87edf499bc4e25a2c88906,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720469166990626313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42b9f3
82d32fb78346f5160840013b51,},Annotations:map[string]string{io.kubernetes.container.hash: 558d1512,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c97c9bf4b2ba515d9c57ff1ad82fdc07c3fa398efe0f30e200eeb4afa9b8b6d,PodSandboxId:933bc29f90e9808f147ef51a67f08bb1bc76f5e51a4380d2fe5089323f512648,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720469166802683027,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string
]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59de9e6a107817af76862bda008f35a5bdbc9c446829a20e23b865829f0e4faa,PodSandboxId:a6e9ec1666c2b5d84b8d8ed23bd1000f09feac56b92f1424d3acad5ea10cf051,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720469166736092660,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a571722211ffd00c8b1df39a68520333,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bde8b17ea0c0a6fdba42f0b205c7d9bcbc19c9c1b529fc4a8f65bd2e6c9c994,PodSandboxId:93bc3377fc8a32869f1698a0c90a2260b2d53df153fb28fe29e1ab8bebe272dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720469166726621718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kuber
netes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ad4f76c216a96416007b988fb821e01602b71a0ced63cf928a9a38ed0db830,PodSandboxId:b1cbe60f17e1a57555fe5615bd406855bcfd913d81cef382d144ac5c297e60a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720468678300626732,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w8l78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dc81a07-5014-49b4-9c2f-e1806d1705e3,},Annotations:map[string]string{io.kubernet
es.container.hash: bb0edd48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa,PodSandboxId:a361ba0082084c514a691b64316861ead9b8e375eb7cd40b33afd6af1af03f89,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535991336335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6m9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45dd66-3096-4878-8b2b-96dcf12bbef2,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfbfbc3,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7,PodSandboxId:3765b2ad464be0e39e9167ec31c3d2778d67836a720a645b4215163b188c3c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720468535981042931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-4lzjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bcfc11d-8368-4c95-bf64-5b3d09c4b455,},Annotations:map[string]string{io.kubernetes.container.hash: 533d4b11,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19,PodSandboxId:8cba18d6a0140bc25d48e77f0a2e64729135c972df7df084b6c8aa9240c7156b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f
6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720468532672426940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmkjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb7c00aa-f846-430e-92a2-04cd2fc8a62b,},Annotations:map[string]string{io.kubernetes.container.hash: bb9acdc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e,PodSandboxId:2e4a76498c1cf7d5f8db02dd3b8e0bae0eb580df6dee167a04024a11c16d3a4a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d76
91a75a899,State:CONTAINER_EXITED,CreatedAt:1720468512224119753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92a647e1bb34408bc27cdc3497f9940,},Annotations:map[string]string{io.kubernetes.container.hash: b85a6327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9,PodSandboxId:bc2b7b56fb60f00fa572ac05479afa32f687953141db6574b3994de1ea0ef0c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt
:1720468512188864425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-511021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c3ccf7626b62492304c03ada682e9ee,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=baa84b06-9249-47cc-a282-d26f20f5229f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d7e9a087907d7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       5                   f8007a8b85880       storage-provisioner
	9a092bcfc2c4c       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      4 minutes ago       Running             kindnet-cni               3                   931eb703b9122       kindnet-4f49v
	8ad6fd7c3f9ca       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago       Running             kube-controller-manager   2                   a6e9ec1666c2b       kube-controller-manager-ha-511021
	ffb8ddfc4919d       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago       Running             kube-apiserver            3                   25a60047a3471       kube-apiserver-ha-511021
	9ec27cea09fe4       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   f97c5267622e6       busybox-fc5497c4f-w8l78
	ad3fc8ef1e029       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   f586e16265310       kube-vip-ha-511021
	80d3a01323446       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   15bfe51f73f1d       coredns-7db6d8ff4d-w6m9c
	38802be5ddf5a       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      5 minutes ago       Exited              kindnet-cni               2                   931eb703b9122       kindnet-4f49v
	6dea00926f165       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      5 minutes ago       Running             kube-proxy                1                   1c8757727c079       kube-proxy-tmkjf
	a2303835cb347       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      5 minutes ago       Exited              kube-apiserver            2                   25a60047a3471       kube-apiserver-ha-511021
	4f47fb0f400e9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   fe17ace71d58c       coredns-7db6d8ff4d-4lzjf
	3c97c9bf4b2ba       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   933bc29f90e98       etcd-ha-511021
	6b4723de2bd2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       4                   f8007a8b85880       storage-provisioner
	59de9e6a10781       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      5 minutes ago       Exited              kube-controller-manager   1                   a6e9ec1666c2b       kube-controller-manager-ha-511021
	7bde8b17ea0c0       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      5 minutes ago       Running             kube-scheduler            1                   93bc3377fc8a3       kube-scheduler-ha-511021
	f1ad4f76c216a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   b1cbe60f17e1a       busybox-fc5497c4f-w8l78
	6b083875d2679       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   a361ba0082084       coredns-7db6d8ff4d-w6m9c
	499dc5b41a3d6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   3765b2ad464be       coredns-7db6d8ff4d-4lzjf
	67153dce61aaa       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      16 minutes ago      Exited              kube-proxy                0                   8cba18d6a0140       kube-proxy-tmkjf
	08189f5ac12ce       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   2e4a76498c1cf       etcd-ha-511021
	019d794c36af8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      16 minutes ago      Exited              kube-scheduler            0                   bc2b7b56fb60f       kube-scheduler-ha-511021
	
	
	==> coredns [499dc5b41a3d6636ec79d235681a8e1219975278547efeb9ef937d1c28d364a7] <==
	[INFO] 10.244.1.2:48742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000218522s
	[INFO] 10.244.1.2:60141 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145244s
	[INFO] 10.244.0.4:58500 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001476805s
	[INFO] 10.244.0.4:53415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090934s
	[INFO] 10.244.0.4:60685 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159681s
	[INFO] 10.244.2.2:35117 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216541s
	[INFO] 10.244.2.2:56929 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000209242s
	[INFO] 10.244.2.2:57601 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099474s
	[INFO] 10.244.1.2:51767 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189518s
	[INFO] 10.244.1.2:53177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013929s
	[INFO] 10.244.0.4:44104 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095184s
	[INFO] 10.244.2.2:51012 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106956s
	[INFO] 10.244.2.2:37460 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124276s
	[INFO] 10.244.2.2:46238 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124359s
	[INFO] 10.244.1.2:56514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153739s
	[INFO] 10.244.1.2:45870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000362406s
	[INFO] 10.244.0.4:54901 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101371s
	[INFO] 10.244.0.4:38430 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128119s
	[INFO] 10.244.0.4:59433 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112582s
	[INFO] 10.244.2.2:50495 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000089543s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4f47fb0f400e915295b2ec21e227b8000e1936d00aa1e9265345bcf18da00776] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43864->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[132853472]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (08-Jul-2024 20:06:21.688) (total time: 10402ms):
	Trace[132853472]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43864->10.96.0.1:443: read: connection reset by peer 10402ms (20:06:32.091)
	Trace[132853472]: [10.402828587s] [10.402828587s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43864->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6b083875d267933068ab737294f211111c3641dc1c794cdf44812a3790f1a9fa] <==
	[INFO] 10.244.0.4:45493 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011856s
	[INFO] 10.244.0.4:43450 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049467s
	[INFO] 10.244.0.4:42950 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177837s
	[INFO] 10.244.2.2:44783 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001772539s
	[INFO] 10.244.2.2:60536 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011424s
	[INFO] 10.244.2.2:56160 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090498s
	[INFO] 10.244.2.2:60942 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001479529s
	[INFO] 10.244.2.2:59066 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078767s
	[INFO] 10.244.1.2:33094 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298986s
	[INFO] 10.244.1.2:41194 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092808s
	[INFO] 10.244.0.4:44172 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168392s
	[INFO] 10.244.0.4:47644 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085824s
	[INFO] 10.244.0.4:45776 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131918s
	[INFO] 10.244.2.2:53642 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164258s
	[INFO] 10.244.1.2:32877 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000282103s
	[INFO] 10.244.1.2:59022 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013901s
	[INFO] 10.244.0.4:35939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129873s
	[INFO] 10.244.2.2:48648 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161626s
	[INFO] 10.244.2.2:59172 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147702s
	[INFO] 10.244.2.2:45542 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156821s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [80d3a01323446653a7398eff9a324e1447553ba76ff841a403de2c956bcfd4ba] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1424619691]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (08-Jul-2024 20:06:16.235) (total time: 10000ms):
	Trace[1424619691]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (20:06:26.236)
	Trace[1424619691]: [10.000852418s] [10.000852418s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35526->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1982365158]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (08-Jul-2024 20:06:18.741) (total time: 13349ms):
	Trace[1982365158]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35526->10.96.0.1:443: read: connection reset by peer 13349ms (20:06:32.090)
	Trace[1982365158]: [13.349789332s] [13.349789332s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35526->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-511021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T19_55_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:55:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:11:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 20:06:47 +0000   Mon, 08 Jul 2024 19:55:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 20:06:47 +0000   Mon, 08 Jul 2024 19:55:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 20:06:47 +0000   Mon, 08 Jul 2024 19:55:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 20:06:47 +0000   Mon, 08 Jul 2024 19:55:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.33
	  Hostname:    ha-511021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b87893acdd9a476ea34795541f3789df
	  System UUID:                b87893ac-dd9a-476e-a347-95541f3789df
	  Boot ID:                    17494c0f-24c9-4604-bfc5-8f8d6538a4f6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w8l78              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-4lzjf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-w6m9c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-511021                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-4f49v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-511021             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-511021    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-tmkjf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-511021             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-511021                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 16m    kube-proxy       
	  Normal   Starting                 4m48s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m    kubelet          Node ha-511021 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-511021 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m    kubelet          Node ha-511021 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m    node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Normal   NodeReady                16m    kubelet          Node ha-511021 status is now: NodeReady
	  Normal   RegisteredNode           14m    node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Normal   RegisteredNode           13m    node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Warning  ContainerGCFailed        6m20s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m44s  node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Normal   RegisteredNode           4m35s  node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	  Normal   RegisteredNode           3m11s  node-controller  Node ha-511021 event: Registered Node ha-511021 in Controller
	
	
	Name:               ha-511021-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T19_56_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:56:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:11:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 20:07:31 +0000   Mon, 08 Jul 2024 20:06:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 20:07:31 +0000   Mon, 08 Jul 2024 20:06:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 20:07:31 +0000   Mon, 08 Jul 2024 20:06:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 20:07:31 +0000   Mon, 08 Jul 2024 20:06:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    ha-511021-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 09ff24d6fb9848b0b108f4ecb99eedc3
	  System UUID:                09ff24d6-fb98-48b0-b108-f4ecb99eedc3
	  Boot ID:                    c44a5023-6fe5-4076-a69c-531dc15a7a1c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xjfx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-511021-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-gn8kn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-511021-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-511021-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-976tb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-511021-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-511021-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m31s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                    node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-511021-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-511021-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-511021-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-511021-m02 status is now: NodeNotReady
	  Normal  Starting                 5m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m16s)  kubelet          Node ha-511021-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m16s)  kubelet          Node ha-511021-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m16s)  kubelet          Node ha-511021-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  RegisteredNode           4m35s                  node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-511021-m02 event: Registered Node ha-511021-m02 in Controller
	
	
	Name:               ha-511021-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-511021-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-511021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T19_58_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:58:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-511021-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:09:10 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 08 Jul 2024 20:08:49 +0000   Mon, 08 Jul 2024 20:09:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 08 Jul 2024 20:08:49 +0000   Mon, 08 Jul 2024 20:09:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 08 Jul 2024 20:08:49 +0000   Mon, 08 Jul 2024 20:09:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 08 Jul 2024 20:08:49 +0000   Mon, 08 Jul 2024 20:09:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    ha-511021-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef479bd2efc3487eb39d936b4399c97b
	  System UUID:                ef479bd2-efc3-487e-b39d-936b4399c97b
	  Boot ID:                    600c1f2b-1d13-4908-ad9e-08608ab905a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6qz76    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-bbbp6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-7mb58           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-511021-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-511021-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-511021-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-511021-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m44s                  node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal   RegisteredNode           4m35s                  node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal   NodeNotReady             4m4s                   node-controller  Node ha-511021-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m11s                  node-controller  Node ha-511021-m04 event: Registered Node ha-511021-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m49s (x2 over 2m49s)  kubelet          Node ha-511021-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x2 over 2m49s)  kubelet          Node ha-511021-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x2 over 2m49s)  kubelet          Node ha-511021-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s                  kubelet          Node ha-511021-m04 has been rebooted, boot id: 600c1f2b-1d13-4908-ad9e-08608ab905a7
	  Normal   NodeReady                2m49s                  kubelet          Node ha-511021-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s                   node-controller  Node ha-511021-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.119364] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.209787] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.142097] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.285009] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.308511] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.058301] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.483782] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.535916] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.022132] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.103961] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.289495] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.234845] kauditd_printk_skb: 72 callbacks suppressed
	[Jul 8 20:02] kauditd_printk_skb: 1 callbacks suppressed
	[Jul 8 20:05] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	[  +0.148703] systemd-fstab-generator[3799]: Ignoring "noauto" option for root device
	[  +0.196171] systemd-fstab-generator[3813]: Ignoring "noauto" option for root device
	[  +0.142590] systemd-fstab-generator[3825]: Ignoring "noauto" option for root device
	[  +0.310749] systemd-fstab-generator[3853]: Ignoring "noauto" option for root device
	[Jul 8 20:06] systemd-fstab-generator[3954]: Ignoring "noauto" option for root device
	[  +0.084307] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.892132] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.268208] kauditd_printk_skb: 86 callbacks suppressed
	[ +17.095890] kauditd_printk_skb: 1 callbacks suppressed
	[ +20.309309] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.477460] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [08189f5ac12cee8e063e930d7fc2e230deb92f971d368cd8cebc53f10da10c7e] <==
	{"level":"info","ts":"2024-07-08T20:04:20.433963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"578695e7c923614c is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-08T20:04:20.434013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"578695e7c923614c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-08T20:04:20.434027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"578695e7c923614c received MsgPreVoteResp from 578695e7c923614c at term 2"}
	{"level":"info","ts":"2024-07-08T20:04:20.434041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"578695e7c923614c [logterm: 2, index: 2131] sent MsgPreVote request to 6e4a8f4a221cc134 at term 2"}
	{"level":"info","ts":"2024-07-08T20:04:20.434048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"578695e7c923614c [logterm: 2, index: 2131] sent MsgPreVote request to 9075682618332c40 at term 2"}
	{"level":"warn","ts":"2024-07-08T20:04:20.461675Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.33:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T20:04:20.461859Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.33:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-08T20:04:20.461969Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"578695e7c923614c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-08T20:04:20.462169Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462204Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462229Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462319Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462405Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462546Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462606Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:04:20.462633Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.462721Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.462872Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.463092Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.463204Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.463316Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"578695e7c923614c","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.463406Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6e4a8f4a221cc134"}
	{"level":"info","ts":"2024-07-08T20:04:20.467132Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.33:2380"}
	{"level":"info","ts":"2024-07-08T20:04:20.467305Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.33:2380"}
	{"level":"info","ts":"2024-07-08T20:04:20.467338Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-511021","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.33:2380"],"advertise-client-urls":["https://192.168.39.33:2379"]}
	
	
	==> etcd [3c97c9bf4b2ba515d9c57ff1ad82fdc07c3fa398efe0f30e200eeb4afa9b8b6d] <==
	{"level":"info","ts":"2024-07-08T20:08:05.071384Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"578695e7c923614c","to":"9075682618332c40","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-08T20:08:05.071601Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:08:05.07812Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"578695e7c923614c","to":"9075682618332c40","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-08T20:08:05.078194Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:08:05.143939Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:09:03.924704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"578695e7c923614c switched to configuration voters=(6306893150923481420 7947322041011323188)"}
	{"level":"info","ts":"2024-07-08T20:09:03.927092Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"ef95fe71d176e4d2","local-member-id":"578695e7c923614c","removed-remote-peer-id":"9075682618332c40","removed-remote-peer-urls":["https://192.168.39.70:2380"]}
	{"level":"info","ts":"2024-07-08T20:09:03.927215Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9075682618332c40"}
	{"level":"warn","ts":"2024-07-08T20:09:03.927376Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"578695e7c923614c","removed-member-id":"9075682618332c40"}
	{"level":"warn","ts":"2024-07-08T20:09:03.927591Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-07-08T20:09:03.927975Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:09:03.928024Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9075682618332c40"}
	{"level":"warn","ts":"2024-07-08T20:09:03.928332Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:09:03.928381Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:09:03.928444Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"warn","ts":"2024-07-08T20:09:03.928681Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40","error":"context canceled"}
	{"level":"warn","ts":"2024-07-08T20:09:03.928751Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"9075682618332c40","error":"failed to read 9075682618332c40 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-08T20:09:03.928868Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"warn","ts":"2024-07-08T20:09:03.929095Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40","error":"context canceled"}
	{"level":"info","ts":"2024-07-08T20:09:03.92918Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"578695e7c923614c","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:09:03.929223Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:09:03.929238Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"578695e7c923614c","removed-remote-peer-id":"9075682618332c40"}
	{"level":"info","ts":"2024-07-08T20:09:03.929272Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"578695e7c923614c","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"9075682618332c40"}
	{"level":"warn","ts":"2024-07-08T20:09:03.94436Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"578695e7c923614c","remote-peer-id-stream-handler":"578695e7c923614c","remote-peer-id-from":"9075682618332c40"}
	{"level":"warn","ts":"2024-07-08T20:09:03.952165Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"578695e7c923614c","remote-peer-id-stream-handler":"578695e7c923614c","remote-peer-id-from":"9075682618332c40"}
	
	
	==> kernel <==
	 20:11:38 up 16 min,  0 users,  load average: 0.34, 0.32, 0.21
	Linux ha-511021 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [38802be5ddf5a10afb78b7100b1dd555db233a693a398965ccca1743380bb1fe] <==
	I0708 20:06:07.481556       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0708 20:06:07.546887       1 main.go:107] hostIP = 192.168.39.33
	podIP = 192.168.39.33
	I0708 20:06:07.547165       1 main.go:116] setting mtu 1500 for CNI 
	I0708 20:06:07.547259       1 main.go:146] kindnetd IP family: "ipv4"
	I0708 20:06:07.547311       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0708 20:06:17.796070       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0708 20:06:27.805052       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0708 20:06:29.017339       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0708 20:06:32.090291       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0708 20:06:35.161597       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kindnet [9a092bcfc2c4cf52b3a7a13ad5de69f2705f9f47507b1ff3c846fd063dc62b0e] <==
	I0708 20:10:57.116901       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:11:07.153601       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:11:07.153649       1 main.go:227] handling current node
	I0708 20:11:07.153665       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:11:07.153670       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:11:07.153839       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:11:07.153867       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:11:17.165214       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:11:17.165344       1 main.go:227] handling current node
	I0708 20:11:17.165400       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:11:17.165407       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:11:17.165564       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:11:17.165590       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:11:27.184853       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:11:27.184890       1 main.go:227] handling current node
	I0708 20:11:27.184902       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:11:27.184907       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:11:27.185025       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:11:27.185054       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	I0708 20:11:37.192150       1 main.go:223] Handling node with IPs: map[192.168.39.33:{}]
	I0708 20:11:37.192187       1 main.go:227] handling current node
	I0708 20:11:37.192197       1 main.go:223] Handling node with IPs: map[192.168.39.216:{}]
	I0708 20:11:37.192202       1 main.go:250] Node ha-511021-m02 has CIDR [10.244.1.0/24] 
	I0708 20:11:37.192292       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0708 20:11:37.192296       1 main.go:250] Node ha-511021-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a2303835cb3470ac48e1c2f7eeacbd0c55e180b7acf710d2929e5f1f7c987570] <==
	I0708 20:06:07.641731       1 options.go:221] external host was not specified, using 192.168.39.33
	I0708 20:06:07.646930       1 server.go:148] Version: v1.30.2
	I0708 20:06:07.646992       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:06:08.204087       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0708 20:06:08.209975       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0708 20:06:08.214079       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0708 20:06:08.214211       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0708 20:06:08.214484       1 instance.go:299] Using reconciler: lease
	W0708 20:06:28.204900       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0708 20:06:28.204901       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0708 20:06:28.215014       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [ffb8ddfc4919dff163e345f60e168e06f35c9d2988df41561e920c4448bd8fed] <==
	I0708 20:06:50.963685       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0708 20:06:51.119393       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0708 20:06:51.127396       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0708 20:06:51.127470       1 policy_source.go:224] refreshing policies
	I0708 20:06:51.136539       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0708 20:06:51.136621       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0708 20:06:51.137564       1 shared_informer.go:320] Caches are synced for configmaps
	I0708 20:06:51.138210       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0708 20:06:51.145586       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0708 20:06:51.150888       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0708 20:06:51.148411       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0708 20:06:51.148449       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0708 20:06:51.151899       1 aggregator.go:165] initial CRD sync complete...
	I0708 20:06:51.151948       1 autoregister_controller.go:141] Starting autoregister controller
	I0708 20:06:51.151973       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0708 20:06:51.152018       1 cache.go:39] Caches are synced for autoregister controller
	W0708 20:06:51.162536       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.216 192.168.39.70]
	I0708 20:06:51.164843       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 20:06:51.175970       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0708 20:06:51.182920       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0708 20:06:51.203083       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 20:06:51.954116       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0708 20:06:52.402084       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.216 192.168.39.33 192.168.39.70]
	W0708 20:07:02.408083       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.216 192.168.39.33]
	W0708 20:09:12.410449       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.216 192.168.39.33]
	
	
	==> kube-controller-manager [59de9e6a107817af76862bda008f35a5bdbc9c446829a20e23b865829f0e4faa] <==
	I0708 20:06:07.924599       1 serving.go:380] Generated self-signed cert in-memory
	I0708 20:06:08.877167       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0708 20:06:08.877210       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:06:08.879203       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0708 20:06:08.879889       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0708 20:06:08.880018       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0708 20:06:08.880106       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0708 20:06:29.222659       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.33:8443/healthz\": dial tcp 192.168.39.33:8443: connect: connection refused"
	
	
	==> kube-controller-manager [8ad6fd7c3f9cad31104529097c8feeb16ff0c5ce58c2ed27a50b3743232c0bc5] <==
	E0708 20:09:43.241762       1 gc_controller.go:153] "Failed to get node" err="node \"ha-511021-m03\" not found" logger="pod-garbage-collector-controller" node="ha-511021-m03"
	E0708 20:09:43.241868       1 gc_controller.go:153] "Failed to get node" err="node \"ha-511021-m03\" not found" logger="pod-garbage-collector-controller" node="ha-511021-m03"
	E0708 20:09:43.241898       1 gc_controller.go:153] "Failed to get node" err="node \"ha-511021-m03\" not found" logger="pod-garbage-collector-controller" node="ha-511021-m03"
	E0708 20:09:43.241926       1 gc_controller.go:153] "Failed to get node" err="node \"ha-511021-m03\" not found" logger="pod-garbage-collector-controller" node="ha-511021-m03"
	I0708 20:09:53.410871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.348524ms"
	I0708 20:09:53.410989       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.444µs"
	E0708 20:10:03.242681       1 gc_controller.go:153] "Failed to get node" err="node \"ha-511021-m03\" not found" logger="pod-garbage-collector-controller" node="ha-511021-m03"
	E0708 20:10:03.242729       1 gc_controller.go:153] "Failed to get node" err="node \"ha-511021-m03\" not found" logger="pod-garbage-collector-controller" node="ha-511021-m03"
	E0708 20:10:03.242736       1 gc_controller.go:153] "Failed to get node" err="node \"ha-511021-m03\" not found" logger="pod-garbage-collector-controller" node="ha-511021-m03"
	E0708 20:10:03.242741       1 gc_controller.go:153] "Failed to get node" err="node \"ha-511021-m03\" not found" logger="pod-garbage-collector-controller" node="ha-511021-m03"
	E0708 20:10:03.242746       1 gc_controller.go:153] "Failed to get node" err="node \"ha-511021-m03\" not found" logger="pod-garbage-collector-controller" node="ha-511021-m03"
	I0708 20:10:03.257989       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-kfpzq"
	I0708 20:10:03.293476       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-kfpzq"
	I0708 20:10:03.293563       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-511021-m03"
	I0708 20:10:03.325713       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-511021-m03"
	I0708 20:10:03.325890       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-511021-m03"
	I0708 20:10:03.371338       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-511021-m03"
	I0708 20:10:03.371429       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-511021-m03"
	I0708 20:10:03.396356       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-511021-m03"
	I0708 20:10:03.396474       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-511021-m03"
	I0708 20:10:03.429247       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-511021-m03"
	I0708 20:10:03.429500       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-511021-m03"
	I0708 20:10:03.501595       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-511021-m03"
	I0708 20:10:03.501693       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-scxw5"
	I0708 20:10:03.541143       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-scxw5"
	
	
	==> kube-proxy [67153dce61aaa3860dc983a0fa9fbb17f7e85439ca3883b1d06fbcf365ab6e19] <==
	E0708 20:03:14.012071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:17.081591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:17.081877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:17.082488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:17.082626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:17.082875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:17.082977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:23.226892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:23.226960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:23.226892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:23.226990       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:23.227399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:23.227567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:32.441428       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:32.441505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:32.442463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:32.442532       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:38.586474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:38.586557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:47.802128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:47.803082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:03:57.017962       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:03:57.018369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-511021&resourceVersion=1785": dial tcp 192.168.39.254:8443: connect: no route to host
	W0708 20:04:03.162461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0708 20:04:03.162528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [6dea00926f165df26d06c6421a15f2c6f0124a7ee17dcff8893fa517b3e434a7] <==
	I0708 20:06:08.319155       1 server_linux.go:69] "Using iptables proxy"
	E0708 20:06:09.114747       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-511021\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0708 20:06:12.186459       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-511021\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0708 20:06:15.258101       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-511021\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0708 20:06:21.404520       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-511021\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0708 20:06:30.617720       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-511021\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0708 20:06:49.326403       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.33"]
	I0708 20:06:49.374073       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 20:06:49.374152       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 20:06:49.374170       1 server_linux.go:165] "Using iptables Proxier"
	I0708 20:06:49.376645       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 20:06:49.376935       1 server.go:872] "Version info" version="v1.30.2"
	I0708 20:06:49.376966       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:06:49.378154       1 config.go:192] "Starting service config controller"
	I0708 20:06:49.378193       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 20:06:49.378589       1 config.go:101] "Starting endpoint slice config controller"
	I0708 20:06:49.378619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 20:06:49.379622       1 config.go:319] "Starting node config controller"
	I0708 20:06:49.379686       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 20:06:49.478909       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 20:06:49.479011       1 shared_informer.go:320] Caches are synced for service config
	I0708 20:06:49.482316       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [019d794c36af8e900693ecc2a2ef2b53d643327f63bd24a2d7d125b8339528e9] <==
	E0708 20:04:17.672311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 20:04:17.958171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 20:04:17.958202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 20:04:17.997560       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 20:04:17.997654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0708 20:04:18.373136       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 20:04:18.373236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 20:04:18.550934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 20:04:18.551079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 20:04:18.712888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 20:04:18.712923       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 20:04:18.842342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 20:04:18.842392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 20:04:19.130897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 20:04:19.130983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 20:04:19.242744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 20:04:19.242909       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 20:04:19.539582       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 20:04:19.539629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0708 20:04:20.278662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 20:04:20.278720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0708 20:04:20.355573       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0708 20:04:20.355741       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0708 20:04:20.355987       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0708 20:04:20.365654       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7bde8b17ea0c0a6fdba42f0b205c7d9bcbc19c9c1b529fc4a8f65bd2e6c9c994] <==
	W0708 20:06:47.804555       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.33:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:47.804693       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.33:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:47.855102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.33:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:47.855329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.33:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:48.046647       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.33:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:48.046753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.33:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:48.310034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.33:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:48.310076       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.33:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:48.803510       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.33:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:48.803580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.33:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:48.937296       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.33:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	E0708 20:06:48.937334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.33:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.33:8443: connect: connection refused
	W0708 20:06:50.972766       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0708 20:06:50.985353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0708 20:06:50.974503       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 20:06:50.974678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 20:06:50.989674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 20:06:50.989648       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 20:06:51.052250       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 20:06:51.052283       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0708 20:07:04.237737       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0708 20:09:00.621481       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6qz76\": pod busybox-fc5497c4f-6qz76 is already assigned to node \"ha-511021-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-6qz76" node="ha-511021-m04"
	E0708 20:09:00.621658       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod daf9e298-1bb9-4f48-a054-bd80af0c3646(default/busybox-fc5497c4f-6qz76) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-6qz76"
	E0708 20:09:00.621707       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6qz76\": pod busybox-fc5497c4f-6qz76 is already assigned to node \"ha-511021-m04\"" pod="default/busybox-fc5497c4f-6qz76"
	I0708 20:09:00.621728       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-6qz76" node="ha-511021-m04"
	
	
	==> kubelet <==
	Jul 08 20:07:25 ha-511021 kubelet[1369]: I0708 20:07:25.890901    1369 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-511021"
	Jul 08 20:07:26 ha-511021 kubelet[1369]: I0708 20:07:26.867280    1369 scope.go:117] "RemoveContainer" containerID="6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4"
	Jul 08 20:07:26 ha-511021 kubelet[1369]: E0708 20:07:26.867613    1369 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7d02def4-3af1-4268-a8fa-072c6fd71c83)\"" pod="kube-system/storage-provisioner" podUID="7d02def4-3af1-4268-a8fa-072c6fd71c83"
	Jul 08 20:07:37 ha-511021 kubelet[1369]: I0708 20:07:37.866997    1369 scope.go:117] "RemoveContainer" containerID="6b4723de2bd2ff0028f3c55c8d010ac190538f8f93cce006a21056b000c757e4"
	Jul 08 20:07:38 ha-511021 kubelet[1369]: I0708 20:07:38.781598    1369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-511021" podStartSLOduration=13.781553079 podStartE2EDuration="13.781553079s" podCreationTimestamp="2024-07-08 20:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-08 20:07:32.972574935 +0000 UTC m=+734.299229264" watchObservedRunningTime="2024-07-08 20:07:38.781553079 +0000 UTC m=+740.108207428"
	Jul 08 20:08:18 ha-511021 kubelet[1369]: E0708 20:08:18.947230    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:08:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:08:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:08:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:08:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 20:09:18 ha-511021 kubelet[1369]: E0708 20:09:18.947371    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:09:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:09:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:09:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:09:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 20:10:18 ha-511021 kubelet[1369]: E0708 20:10:18.949628    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:10:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:10:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:10:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:10:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 20:11:18 ha-511021 kubelet[1369]: E0708 20:11:18.949172    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:11:18 ha-511021 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:11:18 ha-511021 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:11:18 ha-511021 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:11:18 ha-511021 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:11:37.430836   34198 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19195-5988/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
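The "bufio.Scanner: token too long" message in the stderr block above is Go's bufio.ErrTooLong: bufio.Scanner rejects any token larger than its buffer cap, which defaults to bufio.MaxScanTokenSize (64 KiB), so a single oversized line in lastStart.txt is enough to abort the read. Below is a minimal sketch of the failure mode and the usual workaround of enlarging the buffer before scanning; the file path and the 10 MiB cap are illustrative choices, not values taken from minikube's logs package.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this, any line longer than bufio.MaxScanTokenSize (64 KiB)
		// stops the scan and sc.Err() returns bufio.ErrTooLong
		// ("bufio.Scanner: token too long").
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow lines up to 10 MiB

		for sc.Scan() {
			_ = sc.Text() // process one log line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}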
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-511021 -n ha-511021
helpers_test.go:261: (dbg) Run:  kubectl --context ha-511021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.78s)
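A side note on the kube-proxy entries in the post-mortem log above: the request URLs are rendered with %!s(MISSING), %!F(MISSING), %!C(MISSING) and %!D(MISSING) in place of the percent-encoded characters %21, %2F, %2C and %3D. That is Go's fmt package flagging missing operands, which happens when an already-encoded URL ends up in the format-string position of a printf-style call; the underlying requests and errors are otherwise intact. A small sketch of the mechanism, using an illustrative URL rather than the exact client-go call site:

	package main

	import "fmt"

	func main() {
		// Percent-encoded query as it appears in the request URL:
		// "!" -> %21, "/" -> %2F, "=" -> %3D
		url := "labelSelector=%21service.kubernetes.io%2Fheadless&fieldSelector=metadata.name%3Dha-511021"

		// Used as the format string, fmt parses %21s, %2F and %3D as verbs
		// with no operands and prints %!verb(MISSING) placeholders:
		fmt.Printf(url + "\n")
		// labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless&fieldSelector=metadata.name%!D(MISSING)ha-511021

		// Passed as an operand, the URL survives unchanged:
		fmt.Printf("%s\n", url)
		// labelSelector=%21service.kubernetes.io%2Fheadless&fieldSelector=metadata.name%3Dha-511021
	}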

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (311.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-957088
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-957088
E0708 20:26:29.733457   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-957088: exit status 82 (2m2.001231704s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-957088-m03"  ...
	* Stopping node "multinode-957088-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-957088" : exit status 82
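For context on the failure above: exit status 82 is the code the minikube binary returned together with the GUEST_STOP_TIMEOUT message on stderr, and the test fails the stop step on any non-zero exit before moving on to the restart. A hedged sketch of how a harness can surface that exit code with os/exec; the binary path and profile name are copied from the transcript, the rest is illustrative:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-957088")
		out, err := cmd.CombinedOutput()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// In the transcript this prints 82, alongside the
			// GUEST_STOP_TIMEOUT message captured on stderr.
			fmt.Printf("stop exited with status %d\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run stop:", err)
			return
		}
		fmt.Println("stop completed cleanly")
	}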
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-957088 --wait=true -v=8 --alsologtostderr
E0708 20:29:23.844760   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-957088 --wait=true -v=8 --alsologtostderr: (3m6.932646748s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-957088
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-957088 -n multinode-957088
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-957088 logs -n 25: (1.555329031s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp multinode-957088-m02:/home/docker/cp-test.txt                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4089420253/001/cp-test_multinode-957088-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp multinode-957088-m02:/home/docker/cp-test.txt                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088:/home/docker/cp-test_multinode-957088-m02_multinode-957088.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n multinode-957088 sudo cat                                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /home/docker/cp-test_multinode-957088-m02_multinode-957088.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp multinode-957088-m02:/home/docker/cp-test.txt                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03:/home/docker/cp-test_multinode-957088-m02_multinode-957088-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n multinode-957088-m03 sudo cat                                   | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /home/docker/cp-test_multinode-957088-m02_multinode-957088-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp testdata/cp-test.txt                                                | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp multinode-957088-m03:/home/docker/cp-test.txt                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4089420253/001/cp-test_multinode-957088-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp multinode-957088-m03:/home/docker/cp-test.txt                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088:/home/docker/cp-test_multinode-957088-m03_multinode-957088.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n multinode-957088 sudo cat                                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /home/docker/cp-test_multinode-957088-m03_multinode-957088.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp multinode-957088-m03:/home/docker/cp-test.txt                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m02:/home/docker/cp-test_multinode-957088-m03_multinode-957088-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n multinode-957088-m02 sudo cat                                   | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /home/docker/cp-test_multinode-957088-m03_multinode-957088-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-957088 node stop m03                                                          | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	| node    | multinode-957088 node start                                                             | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-957088                                                                | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC |                     |
	| stop    | -p multinode-957088                                                                     | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC |                     |
	| start   | -p multinode-957088                                                                     | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:27 UTC | 08 Jul 24 20:30 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-957088                                                                | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:30 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 20:27:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 20:27:52.506407   43874 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:27:52.506677   43874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:27:52.506686   43874 out.go:304] Setting ErrFile to fd 2...
	I0708 20:27:52.506691   43874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:27:52.506879   43874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:27:52.507403   43874 out.go:298] Setting JSON to false
	I0708 20:27:52.508243   43874 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4221,"bootTime":1720466251,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:27:52.508298   43874 start.go:139] virtualization: kvm guest
	I0708 20:27:52.510536   43874 out.go:177] * [multinode-957088] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:27:52.511793   43874 notify.go:220] Checking for updates...
	I0708 20:27:52.511803   43874 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:27:52.513036   43874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:27:52.514502   43874 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:27:52.515756   43874 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:27:52.516985   43874 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:27:52.518363   43874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:27:52.520130   43874 config.go:182] Loaded profile config "multinode-957088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:27:52.520213   43874 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:27:52.520604   43874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:27:52.520682   43874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:27:52.535899   43874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0708 20:27:52.536385   43874 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:27:52.536932   43874 main.go:141] libmachine: Using API Version  1
	I0708 20:27:52.536955   43874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:27:52.537340   43874 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:27:52.537534   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:27:52.572176   43874 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 20:27:52.573354   43874 start.go:297] selected driver: kvm2
	I0708 20:27:52.573367   43874 start.go:901] validating driver "kvm2" against &{Name:multinode-957088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.2 ClusterName:multinode-957088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:27:52.573496   43874 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:27:52.573803   43874 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:27:52.573871   43874 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:27:52.588344   43874 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:27:52.588968   43874 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:27:52.589023   43874 cni.go:84] Creating CNI manager for ""
	I0708 20:27:52.589035   43874 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0708 20:27:52.589083   43874 start.go:340] cluster config:
	{Name:multinode-957088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-957088 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:27:52.589198   43874 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:27:52.590924   43874 out.go:177] * Starting "multinode-957088" primary control-plane node in "multinode-957088" cluster
	I0708 20:27:52.592097   43874 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:27:52.592134   43874 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 20:27:52.592140   43874 cache.go:56] Caching tarball of preloaded images
	I0708 20:27:52.592206   43874 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 20:27:52.592216   43874 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
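In other words, the preload check above amounts to confirming that the cached tarball is present on the host. Checking it by hand looks like this (a sketch, using the path this run logs):

    # List the preloaded-image tarball minikube found in its cache.
    ls -lh /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4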
	I0708 20:27:52.592318   43874 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/config.json ...
	I0708 20:27:52.592491   43874 start.go:360] acquireMachinesLock for multinode-957088: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:27:52.592528   43874 start.go:364] duration metric: took 20.835µs to acquireMachinesLock for "multinode-957088"
	I0708 20:27:52.592542   43874 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:27:52.592553   43874 fix.go:54] fixHost starting: 
	I0708 20:27:52.592792   43874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:27:52.592818   43874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:27:52.606901   43874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I0708 20:27:52.607318   43874 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:27:52.607779   43874 main.go:141] libmachine: Using API Version  1
	I0708 20:27:52.607802   43874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:27:52.608196   43874 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:27:52.608509   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:27:52.608689   43874 main.go:141] libmachine: (multinode-957088) Calling .GetState
	I0708 20:27:52.610207   43874 fix.go:112] recreateIfNeeded on multinode-957088: state=Running err=<nil>
	W0708 20:27:52.610236   43874 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:27:52.612228   43874 out.go:177] * Updating the running kvm2 "multinode-957088" VM ...
	I0708 20:27:52.613578   43874 machine.go:94] provisionDockerMachine start ...
	I0708 20:27:52.613599   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:27:52.613799   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:27:52.616165   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.616565   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:52.616600   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.616720   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:27:52.616871   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:52.617056   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:52.617199   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:27:52.617381   43874 main.go:141] libmachine: Using SSH client type: native
	I0708 20:27:52.617620   43874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0708 20:27:52.617632   43874 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:27:52.728870   43874 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-957088
	
	I0708 20:27:52.728904   43874 main.go:141] libmachine: (multinode-957088) Calling .GetMachineName
	I0708 20:27:52.729203   43874 buildroot.go:166] provisioning hostname "multinode-957088"
	I0708 20:27:52.729227   43874 main.go:141] libmachine: (multinode-957088) Calling .GetMachineName
	I0708 20:27:52.729410   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:27:52.732202   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.732510   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:52.732542   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.732689   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:27:52.732886   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:52.733084   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:52.733281   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:27:52.733458   43874 main.go:141] libmachine: Using SSH client type: native
	I0708 20:27:52.733607   43874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0708 20:27:52.733619   43874 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-957088 && echo "multinode-957088" | sudo tee /etc/hostname
	I0708 20:27:52.856905   43874 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-957088
	
	I0708 20:27:52.856934   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:27:52.859429   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.859762   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:52.859790   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.859924   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:27:52.860110   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:52.860246   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:52.860342   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:27:52.860468   43874 main.go:141] libmachine: Using SSH client type: native
	I0708 20:27:52.860735   43874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0708 20:27:52.860766   43874 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-957088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-957088/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-957088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:27:52.968499   43874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
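Taken together, the hostname provisioning above reduces to the following shell sequence (a sketch using the values from this run; it assumes the guest allows non-interactive sudo, which the commands in the log clearly rely on):

    # Set the transient and persistent hostname, then make sure /etc/hosts
    # carries a 127.0.1.1 entry for it -- same logic as the log's SSH commands.
    NAME=multinode-957088
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    if ! grep -xq ".*\s$NAME" /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" /etc/hosts
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi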
	I0708 20:27:52.968532   43874 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:27:52.968554   43874 buildroot.go:174] setting up certificates
	I0708 20:27:52.968566   43874 provision.go:84] configureAuth start
	I0708 20:27:52.968577   43874 main.go:141] libmachine: (multinode-957088) Calling .GetMachineName
	I0708 20:27:52.968886   43874 main.go:141] libmachine: (multinode-957088) Calling .GetIP
	I0708 20:27:52.971504   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.971859   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:52.971881   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.972029   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:27:52.974255   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.974559   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:52.974587   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.974750   43874 provision.go:143] copyHostCerts
	I0708 20:27:52.974778   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:27:52.974804   43874 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:27:52.974813   43874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:27:52.974884   43874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:27:52.974963   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:27:52.974979   43874 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:27:52.974985   43874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:27:52.975008   43874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:27:52.975046   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:27:52.975061   43874 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:27:52.975067   43874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:27:52.975086   43874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:27:52.975138   43874 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.multinode-957088 san=[127.0.0.1 192.168.39.44 localhost minikube multinode-957088]
	I0708 20:27:53.029975   43874 provision.go:177] copyRemoteCerts
	I0708 20:27:53.030024   43874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:27:53.030049   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:27:53.032505   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:53.032868   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:53.032900   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:53.033086   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:27:53.033279   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:53.033416   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:27:53.033547   43874 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/multinode-957088/id_rsa Username:docker}
	I0708 20:27:53.118390   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 20:27:53.118449   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:27:53.145389   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 20:27:53.145495   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0708 20:27:53.172386   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 20:27:53.172462   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:27:53.198469   43874 provision.go:87] duration metric: took 229.879254ms to configureAuth
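The server certificate generated here carries the SANs the log lists (127.0.0.1, 192.168.39.44, localhost, minikube, multinode-957088). One way to double-check them against the host-side copy (a sketch; it assumes an OpenSSL 1.1.1+ binary, since -ext is not available in older releases):

    # Print only the subjectAltName extension of the freshly generated server cert.
    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem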
	I0708 20:27:53.198501   43874 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:27:53.198745   43874 config.go:182] Loaded profile config "multinode-957088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:27:53.198823   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:27:53.201225   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:53.201625   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:53.201657   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:53.201818   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:27:53.202005   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:53.202151   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:53.202320   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:27:53.202484   43874 main.go:141] libmachine: Using SSH client type: native
	I0708 20:27:53.202632   43874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0708 20:27:53.202646   43874 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:29:23.921944   43874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:29:23.921975   43874 machine.go:97] duration metric: took 1m31.30838381s to provisionDockerMachine
	I0708 20:29:23.921989   43874 start.go:293] postStartSetup for "multinode-957088" (driver="kvm2")
	I0708 20:29:23.921999   43874 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:29:23.922049   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:29:23.922373   43874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:29:23.922397   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:29:23.925538   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:23.925913   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:29:23.925938   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:23.926070   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:29:23.926271   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:29:23.926425   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:29:23.926571   43874 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/multinode-957088/id_rsa Username:docker}
	I0708 20:29:24.013495   43874 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:29:24.017847   43874 command_runner.go:130] > NAME=Buildroot
	I0708 20:29:24.017873   43874 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0708 20:29:24.017879   43874 command_runner.go:130] > ID=buildroot
	I0708 20:29:24.017886   43874 command_runner.go:130] > VERSION_ID=2023.02.9
	I0708 20:29:24.017892   43874 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0708 20:29:24.017950   43874 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:29:24.017967   43874 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:29:24.018025   43874 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:29:24.018124   43874 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:29:24.018136   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /etc/ssl/certs/131412.pem
	I0708 20:29:24.018248   43874 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:29:24.028598   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:29:24.071657   43874 start.go:296] duration metric: took 149.638569ms for postStartSetup
	I0708 20:29:24.071704   43874 fix.go:56] duration metric: took 1m31.479153778s for fixHost
	I0708 20:29:24.071727   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:29:24.074668   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.075177   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:29:24.075207   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.075370   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:29:24.075568   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:29:24.075724   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:29:24.075862   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:29:24.076034   43874 main.go:141] libmachine: Using SSH client type: native
	I0708 20:29:24.076225   43874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0708 20:29:24.076235   43874 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:29:24.184353   43874 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720470564.154325657
	
	I0708 20:29:24.184373   43874 fix.go:216] guest clock: 1720470564.154325657
	I0708 20:29:24.184381   43874 fix.go:229] Guest: 2024-07-08 20:29:24.154325657 +0000 UTC Remote: 2024-07-08 20:29:24.071708715 +0000 UTC m=+91.599039386 (delta=82.616942ms)
	I0708 20:29:24.184419   43874 fix.go:200] guest clock delta is within tolerance: 82.616942ms
	I0708 20:29:24.184428   43874 start.go:83] releasing machines lock for "multinode-957088", held for 1m31.591890954s
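The guest-clock check above evidently runs date +%s.%N on the VM (the %!s(MISSING) in the log is only a printf quirk) and compares it with the host clock, accepting small drift. A stand-alone version of the same comparison (a sketch, reusing the IP, user and key path this run logs):

    KEY=/home/jenkins/minikube-integration/19195-5988/.minikube/machines/multinode-957088/id_rsa
    guest=$(ssh -i "$KEY" docker@192.168.39.44 'date +%s.%N')
    host=$(date +%s.%N)
    # Positive delta means the host clock is ahead; the log treats ~82ms as within tolerance.
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest clock delta: %.6fs\n", h - g }'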
	I0708 20:29:24.184515   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:29:24.184806   43874 main.go:141] libmachine: (multinode-957088) Calling .GetIP
	I0708 20:29:24.187104   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.187440   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:29:24.187485   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.187622   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:29:24.188163   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:29:24.188335   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:29:24.188433   43874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:29:24.188469   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:29:24.188562   43874 ssh_runner.go:195] Run: cat /version.json
	I0708 20:29:24.188579   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:29:24.191033   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.191262   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.191535   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:29:24.191583   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.191696   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:29:24.191825   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:29:24.191878   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:29:24.191911   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.191946   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:29:24.192035   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:29:24.192099   43874 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/multinode-957088/id_rsa Username:docker}
	I0708 20:29:24.192154   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:29:24.192265   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:29:24.192380   43874 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/multinode-957088/id_rsa Username:docker}
	I0708 20:29:24.268638   43874 command_runner.go:130] > {"iso_version": "v1.33.1-1720011972-19186", "kicbase_version": "v0.0.44-1719972989-19184", "minikube_version": "v1.33.1", "commit": "31623406c84ecd024e1cf2c4d9dbac99bd5bb2b3"}
	I0708 20:29:24.268882   43874 ssh_runner.go:195] Run: systemctl --version
	I0708 20:29:24.292153   43874 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0708 20:29:24.292906   43874 command_runner.go:130] > systemd 252 (252)
	I0708 20:29:24.292951   43874 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0708 20:29:24.293010   43874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:29:24.455755   43874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0708 20:29:24.462332   43874 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0708 20:29:24.462516   43874 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:29:24.462568   43874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:29:24.471992   43874 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0708 20:29:24.472017   43874 start.go:494] detecting cgroup driver to use...
	I0708 20:29:24.472084   43874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:29:24.488420   43874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:29:24.502410   43874 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:29:24.502472   43874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:29:24.516277   43874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:29:24.530250   43874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:29:24.673910   43874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:29:24.816353   43874 docker.go:233] disabling docker service ...
	I0708 20:29:24.816410   43874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:29:24.832875   43874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:29:24.846837   43874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:29:24.986614   43874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:29:25.129398   43874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:29:25.144230   43874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:29:25.164306   43874 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0708 20:29:25.164359   43874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:29:25.164423   43874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.176254   43874 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:29:25.176317   43874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.187522   43874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.198819   43874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.209967   43874 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:29:25.221585   43874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.233026   43874 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.245503   43874 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.257042   43874 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:29:25.267287   43874 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0708 20:29:25.267363   43874 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:29:25.277423   43874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:29:25.425906   43874 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:29:29.219283   43874 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.793339419s)
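The sed edits before this restart set the pause image, switch CRI-O's cgroup manager to cgroupfs, pin conmon to the pod cgroup, and open low ports through a default sysctl. A quick way to confirm the result on the guest (expected lines shown as comments; this assumes the stock 02-crio.conf contained the keys the sed patterns match):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",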
	I0708 20:29:29.219319   43874 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:29:29.219369   43874 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:29:29.224792   43874 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0708 20:29:29.224817   43874 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0708 20:29:29.224824   43874 command_runner.go:130] > Device: 0,22	Inode: 1329        Links: 1
	I0708 20:29:29.224830   43874 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0708 20:29:29.224835   43874 command_runner.go:130] > Access: 2024-07-08 20:29:29.053131358 +0000
	I0708 20:29:29.224840   43874 command_runner.go:130] > Modify: 2024-07-08 20:29:29.053131358 +0000
	I0708 20:29:29.224845   43874 command_runner.go:130] > Change: 2024-07-08 20:29:29.053131358 +0000
	I0708 20:29:29.224848   43874 command_runner.go:130] >  Birth: -
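"Will wait 60s for socket path" boils down to polling stat until the CRI-O socket shows up; an equivalent stand-alone loop (a sketch):

    # Poll for up to 60 seconds; exits non-zero if the socket never appears.
    timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'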
	I0708 20:29:29.224998   43874 start.go:562] Will wait 60s for crictl version
	I0708 20:29:29.225073   43874 ssh_runner.go:195] Run: which crictl
	I0708 20:29:29.229434   43874 command_runner.go:130] > /usr/bin/crictl
	I0708 20:29:29.229520   43874 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:29:29.272608   43874 command_runner.go:130] > Version:  0.1.0
	I0708 20:29:29.272636   43874 command_runner.go:130] > RuntimeName:  cri-o
	I0708 20:29:29.272641   43874 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0708 20:29:29.272648   43874 command_runner.go:130] > RuntimeApiVersion:  v1
	I0708 20:29:29.272673   43874 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:29:29.272761   43874 ssh_runner.go:195] Run: crio --version
	I0708 20:29:29.307075   43874 command_runner.go:130] > crio version 1.29.1
	I0708 20:29:29.307102   43874 command_runner.go:130] > Version:        1.29.1
	I0708 20:29:29.307116   43874 command_runner.go:130] > GitCommit:      unknown
	I0708 20:29:29.307124   43874 command_runner.go:130] > GitCommitDate:  unknown
	I0708 20:29:29.307131   43874 command_runner.go:130] > GitTreeState:   clean
	I0708 20:29:29.307140   43874 command_runner.go:130] > BuildDate:      2024-07-03T18:31:34Z
	I0708 20:29:29.307147   43874 command_runner.go:130] > GoVersion:      go1.21.6
	I0708 20:29:29.307153   43874 command_runner.go:130] > Compiler:       gc
	I0708 20:29:29.307160   43874 command_runner.go:130] > Platform:       linux/amd64
	I0708 20:29:29.307179   43874 command_runner.go:130] > Linkmode:       dynamic
	I0708 20:29:29.307185   43874 command_runner.go:130] > BuildTags:      
	I0708 20:29:29.307196   43874 command_runner.go:130] >   containers_image_ostree_stub
	I0708 20:29:29.307203   43874 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0708 20:29:29.307210   43874 command_runner.go:130] >   btrfs_noversion
	I0708 20:29:29.307216   43874 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0708 20:29:29.307225   43874 command_runner.go:130] >   libdm_no_deferred_remove
	I0708 20:29:29.307228   43874 command_runner.go:130] >   seccomp
	I0708 20:29:29.307232   43874 command_runner.go:130] > LDFlags:          unknown
	I0708 20:29:29.307236   43874 command_runner.go:130] > SeccompEnabled:   true
	I0708 20:29:29.307240   43874 command_runner.go:130] > AppArmorEnabled:  false
	I0708 20:29:29.307303   43874 ssh_runner.go:195] Run: crio --version
	I0708 20:29:29.337383   43874 command_runner.go:130] > crio version 1.29.1
	I0708 20:29:29.337405   43874 command_runner.go:130] > Version:        1.29.1
	I0708 20:29:29.337410   43874 command_runner.go:130] > GitCommit:      unknown
	I0708 20:29:29.337414   43874 command_runner.go:130] > GitCommitDate:  unknown
	I0708 20:29:29.337418   43874 command_runner.go:130] > GitTreeState:   clean
	I0708 20:29:29.337423   43874 command_runner.go:130] > BuildDate:      2024-07-03T18:31:34Z
	I0708 20:29:29.337427   43874 command_runner.go:130] > GoVersion:      go1.21.6
	I0708 20:29:29.337431   43874 command_runner.go:130] > Compiler:       gc
	I0708 20:29:29.337435   43874 command_runner.go:130] > Platform:       linux/amd64
	I0708 20:29:29.337440   43874 command_runner.go:130] > Linkmode:       dynamic
	I0708 20:29:29.337444   43874 command_runner.go:130] > BuildTags:      
	I0708 20:29:29.337448   43874 command_runner.go:130] >   containers_image_ostree_stub
	I0708 20:29:29.337452   43874 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0708 20:29:29.337456   43874 command_runner.go:130] >   btrfs_noversion
	I0708 20:29:29.337463   43874 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0708 20:29:29.337469   43874 command_runner.go:130] >   libdm_no_deferred_remove
	I0708 20:29:29.337473   43874 command_runner.go:130] >   seccomp
	I0708 20:29:29.337479   43874 command_runner.go:130] > LDFlags:          unknown
	I0708 20:29:29.337491   43874 command_runner.go:130] > SeccompEnabled:   true
	I0708 20:29:29.337496   43874 command_runner.go:130] > AppArmorEnabled:  false
	I0708 20:29:29.340162   43874 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:29:29.341543   43874 main.go:141] libmachine: (multinode-957088) Calling .GetIP
	I0708 20:29:29.344351   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:29.344741   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:29:29.344770   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:29.344905   43874 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 20:29:29.349312   43874 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0708 20:29:29.349469   43874 kubeadm.go:877] updating cluster {Name:multinode-957088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.2 ClusterName:multinode-957088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:29:29.349610   43874 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:29:29.349659   43874 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:29:29.397121   43874 command_runner.go:130] > {
	I0708 20:29:29.397146   43874 command_runner.go:130] >   "images": [
	I0708 20:29:29.397152   43874 command_runner.go:130] >     {
	I0708 20:29:29.397165   43874 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0708 20:29:29.397172   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.397204   43874 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0708 20:29:29.397214   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397220   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.397241   43874 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0708 20:29:29.397255   43874 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0708 20:29:29.397264   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397275   43874 command_runner.go:130] >       "size": "65908273",
	I0708 20:29:29.397282   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.397292   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.397305   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.397322   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.397331   43874 command_runner.go:130] >     },
	I0708 20:29:29.397336   43874 command_runner.go:130] >     {
	I0708 20:29:29.397350   43874 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0708 20:29:29.397361   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.397372   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0708 20:29:29.397380   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397388   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.397402   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0708 20:29:29.397416   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0708 20:29:29.397424   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397431   43874 command_runner.go:130] >       "size": "1363676",
	I0708 20:29:29.397439   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.397448   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.397457   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.397465   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.397473   43874 command_runner.go:130] >     },
	I0708 20:29:29.397481   43874 command_runner.go:130] >     {
	I0708 20:29:29.397491   43874 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0708 20:29:29.397499   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.397509   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0708 20:29:29.397517   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397523   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.397536   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0708 20:29:29.397549   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0708 20:29:29.397557   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397564   43874 command_runner.go:130] >       "size": "31470524",
	I0708 20:29:29.397572   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.397586   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.397595   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.397603   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.397617   43874 command_runner.go:130] >     },
	I0708 20:29:29.397624   43874 command_runner.go:130] >     {
	I0708 20:29:29.397632   43874 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0708 20:29:29.397641   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.397652   43874 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0708 20:29:29.397667   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397675   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.397688   43874 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0708 20:29:29.397727   43874 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0708 20:29:29.397736   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397743   43874 command_runner.go:130] >       "size": "61245718",
	I0708 20:29:29.397751   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.397760   43874 command_runner.go:130] >       "username": "nonroot",
	I0708 20:29:29.397769   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.397775   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.397784   43874 command_runner.go:130] >     },
	I0708 20:29:29.397792   43874 command_runner.go:130] >     {
	I0708 20:29:29.397804   43874 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0708 20:29:29.397813   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.397823   43874 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0708 20:29:29.397832   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397839   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.397852   43874 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0708 20:29:29.397866   43874 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0708 20:29:29.397875   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397884   43874 command_runner.go:130] >       "size": "150779692",
	I0708 20:29:29.397893   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.397900   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.397907   43874 command_runner.go:130] >       },
	I0708 20:29:29.397917   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.397926   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.397935   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.397943   43874 command_runner.go:130] >     },
	I0708 20:29:29.397951   43874 command_runner.go:130] >     {
	I0708 20:29:29.397960   43874 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0708 20:29:29.397968   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.397981   43874 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0708 20:29:29.397988   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397993   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.398008   43874 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0708 20:29:29.398023   43874 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0708 20:29:29.398038   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398047   43874 command_runner.go:130] >       "size": "117609954",
	I0708 20:29:29.398052   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.398060   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.398065   43874 command_runner.go:130] >       },
	I0708 20:29:29.398073   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.398079   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.398087   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.398093   43874 command_runner.go:130] >     },
	I0708 20:29:29.398100   43874 command_runner.go:130] >     {
	I0708 20:29:29.398114   43874 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0708 20:29:29.398123   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.398135   43874 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0708 20:29:29.398139   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398145   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.398157   43874 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0708 20:29:29.398169   43874 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0708 20:29:29.398177   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398187   43874 command_runner.go:130] >       "size": "112194888",
	I0708 20:29:29.398195   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.398203   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.398211   43874 command_runner.go:130] >       },
	I0708 20:29:29.398216   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.398224   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.398231   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.398238   43874 command_runner.go:130] >     },
	I0708 20:29:29.398242   43874 command_runner.go:130] >     {
	I0708 20:29:29.398254   43874 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0708 20:29:29.398263   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.398273   43874 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0708 20:29:29.398280   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398286   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.398322   43874 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0708 20:29:29.398337   43874 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0708 20:29:29.398342   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398348   43874 command_runner.go:130] >       "size": "85953433",
	I0708 20:29:29.398360   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.398366   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.398372   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.398378   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.398383   43874 command_runner.go:130] >     },
	I0708 20:29:29.398387   43874 command_runner.go:130] >     {
	I0708 20:29:29.398396   43874 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0708 20:29:29.398402   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.398409   43874 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0708 20:29:29.398415   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398421   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.398436   43874 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0708 20:29:29.398450   43874 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0708 20:29:29.398458   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398464   43874 command_runner.go:130] >       "size": "63051080",
	I0708 20:29:29.398472   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.398479   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.398487   43874 command_runner.go:130] >       },
	I0708 20:29:29.398496   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.398505   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.398514   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.398521   43874 command_runner.go:130] >     },
	I0708 20:29:29.398524   43874 command_runner.go:130] >     {
	I0708 20:29:29.398534   43874 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0708 20:29:29.398541   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.398545   43874 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0708 20:29:29.398549   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398553   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.398566   43874 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0708 20:29:29.398579   43874 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0708 20:29:29.398587   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398594   43874 command_runner.go:130] >       "size": "750414",
	I0708 20:29:29.398604   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.398611   43874 command_runner.go:130] >         "value": "65535"
	I0708 20:29:29.398620   43874 command_runner.go:130] >       },
	I0708 20:29:29.398627   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.398639   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.398646   43874 command_runner.go:130] >       "pinned": true
	I0708 20:29:29.398649   43874 command_runner.go:130] >     }
	I0708 20:29:29.398655   43874 command_runner.go:130] >   ]
	I0708 20:29:29.398658   43874 command_runner.go:130] > }
	I0708 20:29:29.398876   43874 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:29:29.398900   43874 crio.go:433] Images already preloaded, skipping extraction
	I0708 20:29:29.398951   43874 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:29:29.435103   43874 command_runner.go:130] > {
	I0708 20:29:29.435133   43874 command_runner.go:130] >   "images": [
	I0708 20:29:29.435138   43874 command_runner.go:130] >     {
	I0708 20:29:29.435145   43874 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0708 20:29:29.435150   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435156   43874 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0708 20:29:29.435160   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435164   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435179   43874 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0708 20:29:29.435191   43874 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0708 20:29:29.435197   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435205   43874 command_runner.go:130] >       "size": "65908273",
	I0708 20:29:29.435215   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.435227   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.435241   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.435252   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.435268   43874 command_runner.go:130] >     },
	I0708 20:29:29.435277   43874 command_runner.go:130] >     {
	I0708 20:29:29.435287   43874 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0708 20:29:29.435306   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435320   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0708 20:29:29.435326   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435334   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435346   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0708 20:29:29.435357   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0708 20:29:29.435364   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435369   43874 command_runner.go:130] >       "size": "1363676",
	I0708 20:29:29.435375   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.435384   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.435392   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.435396   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.435403   43874 command_runner.go:130] >     },
	I0708 20:29:29.435407   43874 command_runner.go:130] >     {
	I0708 20:29:29.435415   43874 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0708 20:29:29.435422   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435427   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0708 20:29:29.435434   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435438   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435461   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0708 20:29:29.435479   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0708 20:29:29.435489   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435496   43874 command_runner.go:130] >       "size": "31470524",
	I0708 20:29:29.435505   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.435510   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.435515   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.435519   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.435525   43874 command_runner.go:130] >     },
	I0708 20:29:29.435529   43874 command_runner.go:130] >     {
	I0708 20:29:29.435538   43874 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0708 20:29:29.435562   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435576   43874 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0708 20:29:29.435583   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435594   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435612   43874 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0708 20:29:29.435629   43874 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0708 20:29:29.435636   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435646   43874 command_runner.go:130] >       "size": "61245718",
	I0708 20:29:29.435659   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.435667   43874 command_runner.go:130] >       "username": "nonroot",
	I0708 20:29:29.435671   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.435675   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.435682   43874 command_runner.go:130] >     },
	I0708 20:29:29.435686   43874 command_runner.go:130] >     {
	I0708 20:29:29.435695   43874 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0708 20:29:29.435705   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435717   43874 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0708 20:29:29.435727   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435737   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435751   43874 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0708 20:29:29.435766   43874 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0708 20:29:29.435775   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435786   43874 command_runner.go:130] >       "size": "150779692",
	I0708 20:29:29.435795   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.435802   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.435806   43874 command_runner.go:130] >       },
	I0708 20:29:29.435813   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.435817   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.435830   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.435836   43874 command_runner.go:130] >     },
	I0708 20:29:29.435840   43874 command_runner.go:130] >     {
	I0708 20:29:29.435848   43874 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0708 20:29:29.435855   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435861   43874 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0708 20:29:29.435867   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435872   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435882   43874 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0708 20:29:29.435893   43874 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0708 20:29:29.435899   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435910   43874 command_runner.go:130] >       "size": "117609954",
	I0708 20:29:29.435917   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.435921   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.435927   43874 command_runner.go:130] >       },
	I0708 20:29:29.435931   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.435936   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.435940   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.435946   43874 command_runner.go:130] >     },
	I0708 20:29:29.435950   43874 command_runner.go:130] >     {
	I0708 20:29:29.435958   43874 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0708 20:29:29.435965   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435971   43874 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0708 20:29:29.435977   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435981   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435991   43874 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0708 20:29:29.436002   43874 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0708 20:29:29.436008   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436013   43874 command_runner.go:130] >       "size": "112194888",
	I0708 20:29:29.436020   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.436024   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.436031   43874 command_runner.go:130] >       },
	I0708 20:29:29.436035   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.436042   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.436047   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.436053   43874 command_runner.go:130] >     },
	I0708 20:29:29.436057   43874 command_runner.go:130] >     {
	I0708 20:29:29.436063   43874 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0708 20:29:29.436069   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.436076   43874 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0708 20:29:29.436084   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436088   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.436112   43874 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0708 20:29:29.436122   43874 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0708 20:29:29.436128   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436133   43874 command_runner.go:130] >       "size": "85953433",
	I0708 20:29:29.436139   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.436148   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.436156   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.436160   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.436166   43874 command_runner.go:130] >     },
	I0708 20:29:29.436170   43874 command_runner.go:130] >     {
	I0708 20:29:29.436179   43874 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0708 20:29:29.436185   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.436191   43874 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0708 20:29:29.436197   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436201   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.436211   43874 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0708 20:29:29.436220   43874 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0708 20:29:29.436226   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436231   43874 command_runner.go:130] >       "size": "63051080",
	I0708 20:29:29.436234   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.436242   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.436246   43874 command_runner.go:130] >       },
	I0708 20:29:29.436256   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.436263   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.436267   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.436273   43874 command_runner.go:130] >     },
	I0708 20:29:29.436277   43874 command_runner.go:130] >     {
	I0708 20:29:29.436286   43874 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0708 20:29:29.436293   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.436298   43874 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0708 20:29:29.436304   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436308   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.436318   43874 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0708 20:29:29.436325   43874 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0708 20:29:29.436332   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436336   43874 command_runner.go:130] >       "size": "750414",
	I0708 20:29:29.436343   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.436348   43874 command_runner.go:130] >         "value": "65535"
	I0708 20:29:29.436354   43874 command_runner.go:130] >       },
	I0708 20:29:29.436358   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.436364   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.436375   43874 command_runner.go:130] >       "pinned": true
	I0708 20:29:29.436382   43874 command_runner.go:130] >     }
	I0708 20:29:29.436386   43874 command_runner.go:130] >   ]
	I0708 20:29:29.436392   43874 command_runner.go:130] > }
	I0708 20:29:29.436516   43874 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:29:29.436530   43874 cache_images.go:84] Images are preloaded, skipping loading
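
	For reference, the "crictl images --output json" payload echoed above has a small, stable shape per image entry (id, repoTags, repoDigests, size, uid, username, pinned). The following Go sketch decodes that shape; it is illustrative only, assumes nothing beyond the JSON captured in this log, and the file, struct, and function names are not minikube's own.

	// decode_images.go - illustrative sketch; assumes the JSON shape shown in the log above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageList mirrors the fields visible in the "crictl images --output json" output.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Username    string   `json:"username"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Pipe the command output in, e.g.: sudo crictl images --output json | go run decode_images.go
		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			if len(img.RepoTags) > 0 {
				fmt.Printf("%s (pinned=%v, size=%s bytes)\n", img.RepoTags[0], img.Pinned, img.Size)
			}
		}
	}
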
	I0708 20:29:29.436537   43874 kubeadm.go:928] updating node { 192.168.39.44 8443 v1.30.2 crio true true} ...
	I0708 20:29:29.436646   43874 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-957088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-957088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
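
	The "config:" blob above is a flat struct printout. Read as a typed value, it corresponds roughly to the sketch below; the field names are taken verbatim from the log line, while the field types (strings, slices, bools) are assumptions for illustration and this is not minikube's actual config type.

	// Illustrative sketch of the cluster config printed above.
	// Field names come from the log; field types are assumptions.
	package sketch

	type kubernetesConfig struct {
		KubernetesVersion      string
		ClusterName            string
		Namespace              string
		APIServerHAVIP         string
		APIServerName          string
		APIServerNames         []string
		APIServerIPs           []string
		DNSDomain              string
		ContainerRuntime       string
		CRISocket              string
		NetworkPlugin          string
		FeatureGates           string
		ServiceCIDR            string
		ImageRepository        string
		LoadBalancerStartIP    string
		LoadBalancerEndIP      string
		CustomIngressCert      string
		RegistryAliases        string
		ExtraOptions           []string
		ShouldLoadCachedImages bool
		EnableDefaultCNI       bool
		CNI                    string
	}
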
	I0708 20:29:29.436721   43874 ssh_runner.go:195] Run: crio config
	I0708 20:29:29.471430   43874 command_runner.go:130] ! time="2024-07-08 20:29:29.440929795Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0708 20:29:29.476995   43874 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0708 20:29:29.489477   43874 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0708 20:29:29.489505   43874 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0708 20:29:29.489516   43874 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0708 20:29:29.489521   43874 command_runner.go:130] > #
	I0708 20:29:29.489531   43874 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0708 20:29:29.489541   43874 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0708 20:29:29.489553   43874 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0708 20:29:29.489566   43874 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0708 20:29:29.489574   43874 command_runner.go:130] > # reload'.
	I0708 20:29:29.489585   43874 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0708 20:29:29.489597   43874 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0708 20:29:29.489610   43874 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0708 20:29:29.489622   43874 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0708 20:29:29.489630   43874 command_runner.go:130] > [crio]
	I0708 20:29:29.489639   43874 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0708 20:29:29.489649   43874 command_runner.go:130] > # containers images, in this directory.
	I0708 20:29:29.489658   43874 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0708 20:29:29.489677   43874 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0708 20:29:29.489687   43874 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0708 20:29:29.489701   43874 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0708 20:29:29.489710   43874 command_runner.go:130] > # imagestore = ""
	I0708 20:29:29.489723   43874 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0708 20:29:29.489735   43874 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0708 20:29:29.489743   43874 command_runner.go:130] > storage_driver = "overlay"
	I0708 20:29:29.489755   43874 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0708 20:29:29.489766   43874 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0708 20:29:29.489775   43874 command_runner.go:130] > storage_option = [
	I0708 20:29:29.489785   43874 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0708 20:29:29.489793   43874 command_runner.go:130] > ]
	I0708 20:29:29.489804   43874 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0708 20:29:29.489816   43874 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0708 20:29:29.489846   43874 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0708 20:29:29.489858   43874 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0708 20:29:29.489868   43874 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0708 20:29:29.489877   43874 command_runner.go:130] > # always happen on a node reboot
	I0708 20:29:29.489884   43874 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0708 20:29:29.489895   43874 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0708 20:29:29.489903   43874 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0708 20:29:29.489908   43874 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0708 20:29:29.489916   43874 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0708 20:29:29.489923   43874 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0708 20:29:29.489933   43874 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0708 20:29:29.489939   43874 command_runner.go:130] > # internal_wipe = true
	I0708 20:29:29.489947   43874 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0708 20:29:29.489954   43874 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0708 20:29:29.489959   43874 command_runner.go:130] > # internal_repair = false
	I0708 20:29:29.489964   43874 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0708 20:29:29.489972   43874 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0708 20:29:29.489980   43874 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0708 20:29:29.489985   43874 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0708 20:29:29.489992   43874 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0708 20:29:29.489996   43874 command_runner.go:130] > [crio.api]
	I0708 20:29:29.490003   43874 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0708 20:29:29.490008   43874 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0708 20:29:29.490015   43874 command_runner.go:130] > # IP address on which the stream server will listen.
	I0708 20:29:29.490019   43874 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0708 20:29:29.490028   43874 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0708 20:29:29.490035   43874 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0708 20:29:29.490039   43874 command_runner.go:130] > # stream_port = "0"
	I0708 20:29:29.490045   43874 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0708 20:29:29.490051   43874 command_runner.go:130] > # stream_enable_tls = false
	I0708 20:29:29.490056   43874 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0708 20:29:29.490063   43874 command_runner.go:130] > # stream_idle_timeout = ""
	I0708 20:29:29.490069   43874 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0708 20:29:29.490075   43874 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0708 20:29:29.490081   43874 command_runner.go:130] > # minutes.
	I0708 20:29:29.490085   43874 command_runner.go:130] > # stream_tls_cert = ""
	I0708 20:29:29.490098   43874 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0708 20:29:29.490107   43874 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0708 20:29:29.490118   43874 command_runner.go:130] > # stream_tls_key = ""
	I0708 20:29:29.490126   43874 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0708 20:29:29.490133   43874 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0708 20:29:29.490154   43874 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0708 20:29:29.490161   43874 command_runner.go:130] > # stream_tls_ca = ""
	I0708 20:29:29.490174   43874 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0708 20:29:29.490181   43874 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0708 20:29:29.490187   43874 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0708 20:29:29.490194   43874 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0708 20:29:29.490200   43874 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0708 20:29:29.490207   43874 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0708 20:29:29.490211   43874 command_runner.go:130] > [crio.runtime]
	I0708 20:29:29.490219   43874 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0708 20:29:29.490226   43874 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0708 20:29:29.490230   43874 command_runner.go:130] > # "nofile=1024:2048"
	I0708 20:29:29.490238   43874 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0708 20:29:29.490244   43874 command_runner.go:130] > # default_ulimits = [
	I0708 20:29:29.490247   43874 command_runner.go:130] > # ]
	I0708 20:29:29.490253   43874 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0708 20:29:29.490259   43874 command_runner.go:130] > # no_pivot = false
	I0708 20:29:29.490264   43874 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0708 20:29:29.490272   43874 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0708 20:29:29.490280   43874 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0708 20:29:29.490285   43874 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0708 20:29:29.490293   43874 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0708 20:29:29.490299   43874 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0708 20:29:29.490305   43874 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0708 20:29:29.490310   43874 command_runner.go:130] > # Cgroup setting for conmon
	I0708 20:29:29.490318   43874 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0708 20:29:29.490323   43874 command_runner.go:130] > conmon_cgroup = "pod"
	I0708 20:29:29.490330   43874 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0708 20:29:29.490337   43874 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0708 20:29:29.490343   43874 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0708 20:29:29.490348   43874 command_runner.go:130] > conmon_env = [
	I0708 20:29:29.490358   43874 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0708 20:29:29.490364   43874 command_runner.go:130] > ]
	I0708 20:29:29.490369   43874 command_runner.go:130] > # Additional environment variables to set for all the
	I0708 20:29:29.490376   43874 command_runner.go:130] > # containers. These are overridden if set in the
	I0708 20:29:29.490381   43874 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0708 20:29:29.490387   43874 command_runner.go:130] > # default_env = [
	I0708 20:29:29.490390   43874 command_runner.go:130] > # ]
	I0708 20:29:29.490398   43874 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0708 20:29:29.490405   43874 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0708 20:29:29.490411   43874 command_runner.go:130] > # selinux = false
	I0708 20:29:29.490417   43874 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0708 20:29:29.490425   43874 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0708 20:29:29.490431   43874 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0708 20:29:29.490437   43874 command_runner.go:130] > # seccomp_profile = ""
	I0708 20:29:29.490442   43874 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0708 20:29:29.490450   43874 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0708 20:29:29.490460   43874 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0708 20:29:29.490466   43874 command_runner.go:130] > # which might increase security.
	I0708 20:29:29.490471   43874 command_runner.go:130] > # This option is currently deprecated,
	I0708 20:29:29.490479   43874 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0708 20:29:29.490486   43874 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0708 20:29:29.490492   43874 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0708 20:29:29.490500   43874 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0708 20:29:29.490508   43874 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0708 20:29:29.490516   43874 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0708 20:29:29.490523   43874 command_runner.go:130] > # This option supports live configuration reload.
	I0708 20:29:29.490528   43874 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0708 20:29:29.490536   43874 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0708 20:29:29.490542   43874 command_runner.go:130] > # the cgroup blockio controller.
	I0708 20:29:29.490546   43874 command_runner.go:130] > # blockio_config_file = ""
	I0708 20:29:29.490555   43874 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0708 20:29:29.490561   43874 command_runner.go:130] > # blockio parameters.
	I0708 20:29:29.490564   43874 command_runner.go:130] > # blockio_reload = false
	I0708 20:29:29.490572   43874 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0708 20:29:29.490578   43874 command_runner.go:130] > # irqbalance daemon.
	I0708 20:29:29.490583   43874 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0708 20:29:29.490595   43874 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0708 20:29:29.490604   43874 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0708 20:29:29.490612   43874 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0708 20:29:29.490620   43874 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0708 20:29:29.490627   43874 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0708 20:29:29.490634   43874 command_runner.go:130] > # This option supports live configuration reload.
	I0708 20:29:29.490638   43874 command_runner.go:130] > # rdt_config_file = ""
	I0708 20:29:29.490644   43874 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0708 20:29:29.490650   43874 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0708 20:29:29.490681   43874 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0708 20:29:29.490687   43874 command_runner.go:130] > # separate_pull_cgroup = ""
	I0708 20:29:29.490694   43874 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0708 20:29:29.490702   43874 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0708 20:29:29.490709   43874 command_runner.go:130] > # will be added.
	I0708 20:29:29.490716   43874 command_runner.go:130] > # default_capabilities = [
	I0708 20:29:29.490724   43874 command_runner.go:130] > # 	"CHOWN",
	I0708 20:29:29.490729   43874 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0708 20:29:29.490738   43874 command_runner.go:130] > # 	"FSETID",
	I0708 20:29:29.490746   43874 command_runner.go:130] > # 	"FOWNER",
	I0708 20:29:29.490755   43874 command_runner.go:130] > # 	"SETGID",
	I0708 20:29:29.490761   43874 command_runner.go:130] > # 	"SETUID",
	I0708 20:29:29.490769   43874 command_runner.go:130] > # 	"SETPCAP",
	I0708 20:29:29.490775   43874 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0708 20:29:29.490798   43874 command_runner.go:130] > # 	"KILL",
	I0708 20:29:29.490805   43874 command_runner.go:130] > # ]
	I0708 20:29:29.490812   43874 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0708 20:29:29.490821   43874 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0708 20:29:29.490827   43874 command_runner.go:130] > # add_inheritable_capabilities = false
	I0708 20:29:29.490834   43874 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0708 20:29:29.490842   43874 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0708 20:29:29.490847   43874 command_runner.go:130] > default_sysctls = [
	I0708 20:29:29.490853   43874 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0708 20:29:29.490859   43874 command_runner.go:130] > ]
	I0708 20:29:29.490865   43874 command_runner.go:130] > # List of devices on the host that a
	I0708 20:29:29.490873   43874 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0708 20:29:29.490880   43874 command_runner.go:130] > # allowed_devices = [
	I0708 20:29:29.490888   43874 command_runner.go:130] > # 	"/dev/fuse",
	I0708 20:29:29.490893   43874 command_runner.go:130] > # ]
	I0708 20:29:29.490898   43874 command_runner.go:130] > # List of additional devices, specified as
	I0708 20:29:29.490905   43874 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0708 20:29:29.490913   43874 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0708 20:29:29.490918   43874 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0708 20:29:29.490924   43874 command_runner.go:130] > # additional_devices = [
	I0708 20:29:29.490928   43874 command_runner.go:130] > # ]
	I0708 20:29:29.490933   43874 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0708 20:29:29.490938   43874 command_runner.go:130] > # cdi_spec_dirs = [
	I0708 20:29:29.490941   43874 command_runner.go:130] > # 	"/etc/cdi",
	I0708 20:29:29.490946   43874 command_runner.go:130] > # 	"/var/run/cdi",
	I0708 20:29:29.490949   43874 command_runner.go:130] > # ]
	I0708 20:29:29.490955   43874 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0708 20:29:29.490963   43874 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0708 20:29:29.490969   43874 command_runner.go:130] > # Defaults to false.
	I0708 20:29:29.490974   43874 command_runner.go:130] > # device_ownership_from_security_context = false
	I0708 20:29:29.490981   43874 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0708 20:29:29.490987   43874 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0708 20:29:29.490992   43874 command_runner.go:130] > # hooks_dir = [
	I0708 20:29:29.490997   43874 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0708 20:29:29.491002   43874 command_runner.go:130] > # ]
	I0708 20:29:29.491008   43874 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0708 20:29:29.491016   43874 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0708 20:29:29.491023   43874 command_runner.go:130] > # its default mounts from the following two files:
	I0708 20:29:29.491027   43874 command_runner.go:130] > #
	I0708 20:29:29.491033   43874 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0708 20:29:29.491040   43874 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0708 20:29:29.491047   43874 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0708 20:29:29.491051   43874 command_runner.go:130] > #
	I0708 20:29:29.491057   43874 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0708 20:29:29.491065   43874 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0708 20:29:29.491072   43874 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0708 20:29:29.491082   43874 command_runner.go:130] > #      only add mounts it finds in this file.
	I0708 20:29:29.491088   43874 command_runner.go:130] > #
	I0708 20:29:29.491092   43874 command_runner.go:130] > # default_mounts_file = ""
	I0708 20:29:29.491104   43874 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0708 20:29:29.491117   43874 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0708 20:29:29.491123   43874 command_runner.go:130] > pids_limit = 1024
	I0708 20:29:29.491130   43874 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0708 20:29:29.491138   43874 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0708 20:29:29.491143   43874 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0708 20:29:29.491153   43874 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0708 20:29:29.491157   43874 command_runner.go:130] > # log_size_max = -1
	I0708 20:29:29.491164   43874 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0708 20:29:29.491170   43874 command_runner.go:130] > # log_to_journald = false
	I0708 20:29:29.491176   43874 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0708 20:29:29.491183   43874 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0708 20:29:29.491188   43874 command_runner.go:130] > # Path to directory for container attach sockets.
	I0708 20:29:29.491195   43874 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0708 20:29:29.491203   43874 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0708 20:29:29.491209   43874 command_runner.go:130] > # bind_mount_prefix = ""
	I0708 20:29:29.491214   43874 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0708 20:29:29.491220   43874 command_runner.go:130] > # read_only = false
	I0708 20:29:29.491226   43874 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0708 20:29:29.491234   43874 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0708 20:29:29.491238   43874 command_runner.go:130] > # live configuration reload.
	I0708 20:29:29.491244   43874 command_runner.go:130] > # log_level = "info"
	I0708 20:29:29.491250   43874 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0708 20:29:29.491257   43874 command_runner.go:130] > # This option supports live configuration reload.
	I0708 20:29:29.491261   43874 command_runner.go:130] > # log_filter = ""
	I0708 20:29:29.491267   43874 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0708 20:29:29.491281   43874 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0708 20:29:29.491287   43874 command_runner.go:130] > # separated by comma.
	I0708 20:29:29.491295   43874 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0708 20:29:29.491301   43874 command_runner.go:130] > # uid_mappings = ""
	I0708 20:29:29.491306   43874 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0708 20:29:29.491314   43874 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0708 20:29:29.491318   43874 command_runner.go:130] > # separated by comma.
	I0708 20:29:29.491326   43874 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0708 20:29:29.491332   43874 command_runner.go:130] > # gid_mappings = ""
	I0708 20:29:29.491337   43874 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0708 20:29:29.491349   43874 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0708 20:29:29.491357   43874 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0708 20:29:29.491367   43874 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0708 20:29:29.491373   43874 command_runner.go:130] > # minimum_mappable_uid = -1
	I0708 20:29:29.491379   43874 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0708 20:29:29.491387   43874 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0708 20:29:29.491395   43874 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0708 20:29:29.491402   43874 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0708 20:29:29.491408   43874 command_runner.go:130] > # minimum_mappable_gid = -1
	I0708 20:29:29.491413   43874 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0708 20:29:29.491421   43874 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0708 20:29:29.491427   43874 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0708 20:29:29.491433   43874 command_runner.go:130] > # ctr_stop_timeout = 30
	I0708 20:29:29.491439   43874 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0708 20:29:29.491446   43874 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0708 20:29:29.491470   43874 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0708 20:29:29.491478   43874 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0708 20:29:29.491486   43874 command_runner.go:130] > drop_infra_ctr = false
	I0708 20:29:29.491491   43874 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0708 20:29:29.491499   43874 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0708 20:29:29.491508   43874 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0708 20:29:29.491514   43874 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0708 20:29:29.491521   43874 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0708 20:29:29.491528   43874 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0708 20:29:29.491533   43874 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0708 20:29:29.491540   43874 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0708 20:29:29.491544   43874 command_runner.go:130] > # shared_cpuset = ""
	I0708 20:29:29.491552   43874 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0708 20:29:29.491557   43874 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0708 20:29:29.491563   43874 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0708 20:29:29.491570   43874 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0708 20:29:29.491576   43874 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0708 20:29:29.491581   43874 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0708 20:29:29.491589   43874 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0708 20:29:29.491595   43874 command_runner.go:130] > # enable_criu_support = false
	I0708 20:29:29.491599   43874 command_runner.go:130] > # Enable/disable the generation of the container,
	I0708 20:29:29.491614   43874 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0708 20:29:29.491620   43874 command_runner.go:130] > # enable_pod_events = false
	I0708 20:29:29.491626   43874 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0708 20:29:29.491634   43874 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0708 20:29:29.491639   43874 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0708 20:29:29.491646   43874 command_runner.go:130] > # default_runtime = "runc"
	I0708 20:29:29.491650   43874 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0708 20:29:29.491659   43874 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0708 20:29:29.491670   43874 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0708 20:29:29.491677   43874 command_runner.go:130] > # creation as a file is not desired either.
	I0708 20:29:29.491685   43874 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0708 20:29:29.491692   43874 command_runner.go:130] > # the hostname is being managed dynamically.
	I0708 20:29:29.491696   43874 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0708 20:29:29.491701   43874 command_runner.go:130] > # ]
	I0708 20:29:29.491711   43874 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0708 20:29:29.491723   43874 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0708 20:29:29.491734   43874 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0708 20:29:29.491744   43874 command_runner.go:130] > # Each entry in the table should follow the format:
	I0708 20:29:29.491752   43874 command_runner.go:130] > #
	I0708 20:29:29.491759   43874 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0708 20:29:29.491769   43874 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0708 20:29:29.491825   43874 command_runner.go:130] > # runtime_type = "oci"
	I0708 20:29:29.491834   43874 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0708 20:29:29.491838   43874 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0708 20:29:29.491842   43874 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0708 20:29:29.491847   43874 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0708 20:29:29.491850   43874 command_runner.go:130] > # monitor_env = []
	I0708 20:29:29.491855   43874 command_runner.go:130] > # privileged_without_host_devices = false
	I0708 20:29:29.491859   43874 command_runner.go:130] > # allowed_annotations = []
	I0708 20:29:29.491865   43874 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0708 20:29:29.491869   43874 command_runner.go:130] > # Where:
	I0708 20:29:29.491877   43874 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0708 20:29:29.491882   43874 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0708 20:29:29.491891   43874 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0708 20:29:29.491899   43874 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0708 20:29:29.491905   43874 command_runner.go:130] > #   in $PATH.
	I0708 20:29:29.491915   43874 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0708 20:29:29.491923   43874 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0708 20:29:29.491931   43874 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0708 20:29:29.491936   43874 command_runner.go:130] > #   state.
	I0708 20:29:29.491942   43874 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0708 20:29:29.491950   43874 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0708 20:29:29.491957   43874 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0708 20:29:29.491964   43874 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0708 20:29:29.491972   43874 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0708 20:29:29.491981   43874 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0708 20:29:29.491987   43874 command_runner.go:130] > #   The currently recognized values are:
	I0708 20:29:29.491994   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0708 20:29:29.492003   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0708 20:29:29.492011   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0708 20:29:29.492017   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0708 20:29:29.492027   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0708 20:29:29.492036   43874 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0708 20:29:29.492042   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0708 20:29:29.492050   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0708 20:29:29.492055   43874 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0708 20:29:29.492063   43874 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0708 20:29:29.492069   43874 command_runner.go:130] > #   deprecated option "conmon".
	I0708 20:29:29.492076   43874 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0708 20:29:29.492083   43874 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0708 20:29:29.492089   43874 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0708 20:29:29.492096   43874 command_runner.go:130] > #   should be moved to the container's cgroup
	I0708 20:29:29.492102   43874 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0708 20:29:29.492109   43874 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0708 20:29:29.492121   43874 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0708 20:29:29.492126   43874 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0708 20:29:29.492131   43874 command_runner.go:130] > #
	I0708 20:29:29.492136   43874 command_runner.go:130] > # Using the seccomp notifier feature:
	I0708 20:29:29.492142   43874 command_runner.go:130] > #
	I0708 20:29:29.492147   43874 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0708 20:29:29.492158   43874 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0708 20:29:29.492163   43874 command_runner.go:130] > #
	I0708 20:29:29.492175   43874 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0708 20:29:29.492183   43874 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0708 20:29:29.492187   43874 command_runner.go:130] > #
	I0708 20:29:29.492193   43874 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0708 20:29:29.492198   43874 command_runner.go:130] > # feature.
	I0708 20:29:29.492202   43874 command_runner.go:130] > #
	I0708 20:29:29.492208   43874 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0708 20:29:29.492216   43874 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0708 20:29:29.492224   43874 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0708 20:29:29.492229   43874 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0708 20:29:29.492237   43874 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0708 20:29:29.492242   43874 command_runner.go:130] > #
	I0708 20:29:29.492247   43874 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0708 20:29:29.492255   43874 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0708 20:29:29.492259   43874 command_runner.go:130] > #
	I0708 20:29:29.492265   43874 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0708 20:29:29.492272   43874 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0708 20:29:29.492275   43874 command_runner.go:130] > #
	I0708 20:29:29.492281   43874 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0708 20:29:29.492288   43874 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0708 20:29:29.492292   43874 command_runner.go:130] > # limitation.
	I0708 20:29:29.492298   43874 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0708 20:29:29.492305   43874 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0708 20:29:29.492309   43874 command_runner.go:130] > runtime_type = "oci"
	I0708 20:29:29.492314   43874 command_runner.go:130] > runtime_root = "/run/runc"
	I0708 20:29:29.492318   43874 command_runner.go:130] > runtime_config_path = ""
	I0708 20:29:29.492322   43874 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0708 20:29:29.492326   43874 command_runner.go:130] > monitor_cgroup = "pod"
	I0708 20:29:29.492332   43874 command_runner.go:130] > monitor_exec_cgroup = ""
	I0708 20:29:29.492336   43874 command_runner.go:130] > monitor_env = [
	I0708 20:29:29.492344   43874 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0708 20:29:29.492349   43874 command_runner.go:130] > ]
	I0708 20:29:29.492353   43874 command_runner.go:130] > privileged_without_host_devices = false
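
The [crio.runtime.runtimes.runc] block above is exactly the kind of handler table the preceding comments describe. As a minimal Go sketch, here is how an additional handler could be rendered as a drop-in for /etc/crio/crio.conf.d/, including an allowed_annotations entry for the seccomp notifier discussed above; the handler name crun-test, the binary paths, and the output file name are hypothetical and not taken from this run.

package main

import (
	"os"
	"text/template"
)

// runtimeHandler mirrors the documented [crio.runtime.runtimes.<name>] fields.
type runtimeHandler struct {
	Name        string
	RuntimePath string
	RuntimeType string
	RuntimeRoot string
	MonitorPath string
}

// dropIn renders a TOML table for one additional runtime handler.
var dropIn = template.Must(template.New("handler").Parse(`[crio.runtime.runtimes.{{.Name}}]
runtime_path = "{{.RuntimePath}}"
runtime_type = "{{.RuntimeType}}"
runtime_root = "{{.RuntimeRoot}}"
monitor_path = "{{.MonitorPath}}"
allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
`))

func main() {
	// Hypothetical handler; CRI-O would pick such a file up from /etc/crio/crio.conf.d/.
	h := runtimeHandler{
		Name:        "crun-test",
		RuntimePath: "/usr/bin/crun",
		RuntimeType: "oci",
		RuntimeRoot: "/run/crun",
		MonitorPath: "/usr/libexec/crio/conmon",
	}
	f, err := os.Create("10-crun-test.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := dropIn.Execute(f, h); err != nil {
		panic(err)
	}
}

CRI-O reads such drop-ins in addition to the main crio.conf, so only the fields being changed need to appear in the file.
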
	I0708 20:29:29.492362   43874 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0708 20:29:29.492367   43874 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0708 20:29:29.492375   43874 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0708 20:29:29.492388   43874 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0708 20:29:29.492398   43874 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0708 20:29:29.492405   43874 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0708 20:29:29.492414   43874 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0708 20:29:29.492423   43874 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0708 20:29:29.492431   43874 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0708 20:29:29.492437   43874 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0708 20:29:29.492442   43874 command_runner.go:130] > # Example:
	I0708 20:29:29.492447   43874 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0708 20:29:29.492453   43874 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0708 20:29:29.492458   43874 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0708 20:29:29.492465   43874 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0708 20:29:29.492468   43874 command_runner.go:130] > # cpuset = 0
	I0708 20:29:29.492474   43874 command_runner.go:130] > # cpushares = "0-1"
	I0708 20:29:29.492478   43874 command_runner.go:130] > # Where:
	I0708 20:29:29.492484   43874 command_runner.go:130] > # The workload name is workload-type.
	I0708 20:29:29.492491   43874 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0708 20:29:29.492499   43874 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0708 20:29:29.492507   43874 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0708 20:29:29.492514   43874 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0708 20:29:29.492521   43874 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0708 20:29:29.492526   43874 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0708 20:29:29.492535   43874 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0708 20:29:29.492545   43874 command_runner.go:130] > # Default value is set to true
	I0708 20:29:29.492552   43874 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0708 20:29:29.492557   43874 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0708 20:29:29.492564   43874 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0708 20:29:29.492568   43874 command_runner.go:130] > # Default value is set to 'false'
	I0708 20:29:29.492574   43874 command_runner.go:130] > # disable_hostport_mapping = false
	I0708 20:29:29.492580   43874 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0708 20:29:29.492584   43874 command_runner.go:130] > #
	I0708 20:29:29.492589   43874 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0708 20:29:29.492595   43874 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0708 20:29:29.492600   43874 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0708 20:29:29.492606   43874 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0708 20:29:29.492611   43874 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0708 20:29:29.492619   43874 command_runner.go:130] > [crio.image]
	I0708 20:29:29.492624   43874 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0708 20:29:29.492628   43874 command_runner.go:130] > # default_transport = "docker://"
	I0708 20:29:29.492634   43874 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0708 20:29:29.492639   43874 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0708 20:29:29.492643   43874 command_runner.go:130] > # global_auth_file = ""
	I0708 20:29:29.492647   43874 command_runner.go:130] > # The image used to instantiate infra containers.
	I0708 20:29:29.492652   43874 command_runner.go:130] > # This option supports live configuration reload.
	I0708 20:29:29.492656   43874 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0708 20:29:29.492661   43874 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0708 20:29:29.492667   43874 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0708 20:29:29.492671   43874 command_runner.go:130] > # This option supports live configuration reload.
	I0708 20:29:29.492675   43874 command_runner.go:130] > # pause_image_auth_file = ""
	I0708 20:29:29.492680   43874 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0708 20:29:29.492686   43874 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0708 20:29:29.492692   43874 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0708 20:29:29.492697   43874 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0708 20:29:29.492700   43874 command_runner.go:130] > # pause_command = "/pause"
	I0708 20:29:29.492708   43874 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0708 20:29:29.492716   43874 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0708 20:29:29.492729   43874 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0708 20:29:29.492740   43874 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0708 20:29:29.492748   43874 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0708 20:29:29.492757   43874 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0708 20:29:29.492763   43874 command_runner.go:130] > # pinned_images = [
	I0708 20:29:29.492767   43874 command_runner.go:130] > # ]
	I0708 20:29:29.492774   43874 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0708 20:29:29.492784   43874 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0708 20:29:29.492796   43874 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0708 20:29:29.492806   43874 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0708 20:29:29.492817   43874 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0708 20:29:29.492826   43874 command_runner.go:130] > # signature_policy = ""
	I0708 20:29:29.492837   43874 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0708 20:29:29.492848   43874 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0708 20:29:29.492858   43874 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0708 20:29:29.492867   43874 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0708 20:29:29.492881   43874 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0708 20:29:29.492889   43874 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0708 20:29:29.492897   43874 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0708 20:29:29.492905   43874 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0708 20:29:29.492911   43874 command_runner.go:130] > # changing them here.
	I0708 20:29:29.492916   43874 command_runner.go:130] > # insecure_registries = [
	I0708 20:29:29.492920   43874 command_runner.go:130] > # ]
	I0708 20:29:29.492926   43874 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0708 20:29:29.492934   43874 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0708 20:29:29.492938   43874 command_runner.go:130] > # image_volumes = "mkdir"
	I0708 20:29:29.492945   43874 command_runner.go:130] > # Temporary directory to use for storing big files
	I0708 20:29:29.492949   43874 command_runner.go:130] > # big_files_temporary_dir = ""
	I0708 20:29:29.492957   43874 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0708 20:29:29.492963   43874 command_runner.go:130] > # CNI plugins.
	I0708 20:29:29.492967   43874 command_runner.go:130] > [crio.network]
	I0708 20:29:29.492974   43874 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0708 20:29:29.492980   43874 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0708 20:29:29.492985   43874 command_runner.go:130] > # cni_default_network = ""
	I0708 20:29:29.492990   43874 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0708 20:29:29.492997   43874 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0708 20:29:29.493002   43874 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0708 20:29:29.493008   43874 command_runner.go:130] > # plugin_dirs = [
	I0708 20:29:29.493012   43874 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0708 20:29:29.493017   43874 command_runner.go:130] > # ]
	I0708 20:29:29.493023   43874 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0708 20:29:29.493029   43874 command_runner.go:130] > [crio.metrics]
	I0708 20:29:29.493038   43874 command_runner.go:130] > # Globally enable or disable metrics support.
	I0708 20:29:29.493044   43874 command_runner.go:130] > enable_metrics = true
	I0708 20:29:29.493049   43874 command_runner.go:130] > # Specify enabled metrics collectors.
	I0708 20:29:29.493055   43874 command_runner.go:130] > # Per default all metrics are enabled.
	I0708 20:29:29.493061   43874 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0708 20:29:29.493069   43874 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0708 20:29:29.493077   43874 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0708 20:29:29.493081   43874 command_runner.go:130] > # metrics_collectors = [
	I0708 20:29:29.493085   43874 command_runner.go:130] > # 	"operations",
	I0708 20:29:29.493092   43874 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0708 20:29:29.493100   43874 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0708 20:29:29.493107   43874 command_runner.go:130] > # 	"operations_errors",
	I0708 20:29:29.493114   43874 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0708 20:29:29.493120   43874 command_runner.go:130] > # 	"image_pulls_by_name",
	I0708 20:29:29.493125   43874 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0708 20:29:29.493131   43874 command_runner.go:130] > # 	"image_pulls_failures",
	I0708 20:29:29.493135   43874 command_runner.go:130] > # 	"image_pulls_successes",
	I0708 20:29:29.493139   43874 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0708 20:29:29.493143   43874 command_runner.go:130] > # 	"image_layer_reuse",
	I0708 20:29:29.493150   43874 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0708 20:29:29.493154   43874 command_runner.go:130] > # 	"containers_oom_total",
	I0708 20:29:29.493159   43874 command_runner.go:130] > # 	"containers_oom",
	I0708 20:29:29.493163   43874 command_runner.go:130] > # 	"processes_defunct",
	I0708 20:29:29.493167   43874 command_runner.go:130] > # 	"operations_total",
	I0708 20:29:29.493173   43874 command_runner.go:130] > # 	"operations_latency_seconds",
	I0708 20:29:29.493177   43874 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0708 20:29:29.493183   43874 command_runner.go:130] > # 	"operations_errors_total",
	I0708 20:29:29.493188   43874 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0708 20:29:29.493195   43874 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0708 20:29:29.493199   43874 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0708 20:29:29.493204   43874 command_runner.go:130] > # 	"image_pulls_success_total",
	I0708 20:29:29.493208   43874 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0708 20:29:29.493213   43874 command_runner.go:130] > # 	"containers_oom_count_total",
	I0708 20:29:29.493219   43874 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0708 20:29:29.493224   43874 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0708 20:29:29.493229   43874 command_runner.go:130] > # ]
	I0708 20:29:29.493233   43874 command_runner.go:130] > # The port on which the metrics server will listen.
	I0708 20:29:29.493239   43874 command_runner.go:130] > # metrics_port = 9090
	I0708 20:29:29.493244   43874 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0708 20:29:29.493250   43874 command_runner.go:130] > # metrics_socket = ""
	I0708 20:29:29.493255   43874 command_runner.go:130] > # The certificate for the secure metrics server.
	I0708 20:29:29.493263   43874 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0708 20:29:29.493272   43874 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0708 20:29:29.493278   43874 command_runner.go:130] > # certificate on any modification event.
	I0708 20:29:29.493282   43874 command_runner.go:130] > # metrics_cert = ""
	I0708 20:29:29.493289   43874 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0708 20:29:29.493299   43874 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0708 20:29:29.493311   43874 command_runner.go:130] > # metrics_key = ""
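
Since enable_metrics is set to true above, the collectors listed can be scraped over plain HTTP. A minimal sketch, assuming the default metrics_port of 9090 on the node and no metrics_cert/metrics_key configured; the filter just picks out the image-pull series mentioned in the collector list.

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Assumes CRI-O metrics exposed on the default metrics_port 9090 of this node.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// Keep only the image-pull series; per the prefixing rule above they may
	// appear as crio_image_pulls_* or container_runtime_crio_image_pulls_*.
	for _, line := range strings.Split(string(body), "\n") {
		if strings.Contains(line, "crio_image_pulls") {
			fmt.Println(line)
		}
	}
}
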
	I0708 20:29:29.493317   43874 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0708 20:29:29.493323   43874 command_runner.go:130] > [crio.tracing]
	I0708 20:29:29.493328   43874 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0708 20:29:29.493334   43874 command_runner.go:130] > # enable_tracing = false
	I0708 20:29:29.493340   43874 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0708 20:29:29.493346   43874 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0708 20:29:29.493353   43874 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0708 20:29:29.493359   43874 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0708 20:29:29.493364   43874 command_runner.go:130] > # CRI-O NRI configuration.
	I0708 20:29:29.493369   43874 command_runner.go:130] > [crio.nri]
	I0708 20:29:29.493374   43874 command_runner.go:130] > # Globally enable or disable NRI.
	I0708 20:29:29.493380   43874 command_runner.go:130] > # enable_nri = false
	I0708 20:29:29.493384   43874 command_runner.go:130] > # NRI socket to listen on.
	I0708 20:29:29.493391   43874 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0708 20:29:29.493395   43874 command_runner.go:130] > # NRI plugin directory to use.
	I0708 20:29:29.493400   43874 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0708 20:29:29.493405   43874 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0708 20:29:29.493411   43874 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0708 20:29:29.493417   43874 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0708 20:29:29.493423   43874 command_runner.go:130] > # nri_disable_connections = false
	I0708 20:29:29.493428   43874 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0708 20:29:29.493435   43874 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0708 20:29:29.493440   43874 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0708 20:29:29.493447   43874 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0708 20:29:29.493453   43874 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0708 20:29:29.493458   43874 command_runner.go:130] > [crio.stats]
	I0708 20:29:29.493464   43874 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0708 20:29:29.493471   43874 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0708 20:29:29.493475   43874 command_runner.go:130] > # stats_collection_period = 0
	I0708 20:29:29.495354   43874 cni.go:84] Creating CNI manager for ""
	I0708 20:29:29.495389   43874 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0708 20:29:29.495401   43874 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:29:29.495427   43874 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.44 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-957088 NodeName:multinode-957088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:29:29.495567   43874 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-957088"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.44"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
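
The kubeadm/kubelet/kube-proxy YAML above is generated from the option values logged at kubeadm.go:181 (node name, advertise address, pod and service subnets, Kubernetes version). As an illustration only, not minikube's actual template, the following Go sketch renders a stripped-down version of the same documents from those values.

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts carries the handful of values that appear in the log above.
type kubeadmOpts struct {
	NodeName          string
	AdvertiseAddress  string
	BindPort          int
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

// initTmpl renders a minimal InitConfiguration/ClusterConfiguration pair.
var initTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`))

func main() {
	opts := kubeadmOpts{
		NodeName:          "multinode-957088",
		AdvertiseAddress:  "192.168.39.44",
		BindPort:          8443,
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.30.2",
	}
	if err := initTmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
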
	
	I0708 20:29:29.495626   43874 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:29:29.507591   43874 command_runner.go:130] > kubeadm
	I0708 20:29:29.507612   43874 command_runner.go:130] > kubectl
	I0708 20:29:29.507616   43874 command_runner.go:130] > kubelet
	I0708 20:29:29.507635   43874 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:29:29.507690   43874 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:29:29.517519   43874 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0708 20:29:29.535779   43874 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:29:29.553051   43874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0708 20:29:29.569824   43874 ssh_runner.go:195] Run: grep 192.168.39.44	control-plane.minikube.internal$ /etc/hosts
	I0708 20:29:29.573924   43874 command_runner.go:130] > 192.168.39.44	control-plane.minikube.internal
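
The grep above only confirms that control-plane.minikube.internal already resolves to 192.168.39.44 via the node's /etc/hosts. An equivalent check as a Go sketch (host name and IP taken from the log; the /etc/hosts path is the node's, so it would have to run there):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const ip, host = "192.168.39.44", "control-plane.minikube.internal"

	f, err := os.Open("/etc/hosts")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		// A matching entry starts with the IP and lists the host name.
		if len(fields) >= 2 && fields[0] == ip {
			for _, name := range fields[1:] {
				if name == host {
					fmt.Println("host entry present")
					return
				}
			}
		}
	}
	fmt.Println("host entry missing")
}
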
	I0708 20:29:29.574109   43874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:29:29.714266   43874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:29:29.729309   43874 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088 for IP: 192.168.39.44
	I0708 20:29:29.729331   43874 certs.go:194] generating shared ca certs ...
	I0708 20:29:29.729346   43874 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:29:29.729515   43874 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:29:29.729565   43874 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:29:29.729578   43874 certs.go:256] generating profile certs ...
	I0708 20:29:29.729688   43874 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/client.key
	I0708 20:29:29.729762   43874 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/apiserver.key.49267aaa
	I0708 20:29:29.729805   43874 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/proxy-client.key
	I0708 20:29:29.729817   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 20:29:29.729836   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 20:29:29.729852   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 20:29:29.729869   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 20:29:29.729894   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 20:29:29.729938   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 20:29:29.729963   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 20:29:29.729978   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 20:29:29.730042   43874 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:29:29.730079   43874 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:29:29.730092   43874 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:29:29.730127   43874 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:29:29.730154   43874 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:29:29.730188   43874 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:29:29.730243   43874 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:29:29.730280   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /usr/share/ca-certificates/131412.pem
	I0708 20:29:29.730299   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:29:29.730315   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem -> /usr/share/ca-certificates/13141.pem
	I0708 20:29:29.731168   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:29:29.757569   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:29:29.782884   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:29:29.808319   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:29:29.833142   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0708 20:29:29.857192   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:29:29.881095   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:29:29.906977   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 20:29:29.932865   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:29:29.959068   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:29:29.983782   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:29:30.010206   43874 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:29:30.028193   43874 ssh_runner.go:195] Run: openssl version
	I0708 20:29:30.035067   43874 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0708 20:29:30.035149   43874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:29:30.046393   43874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:29:30.051313   43874 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:29:30.051351   43874 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:29:30.051396   43874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:29:30.057303   43874 command_runner.go:130] > 3ec20f2e
	I0708 20:29:30.057452   43874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:29:30.066972   43874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:29:30.077966   43874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:29:30.082675   43874 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:29:30.082709   43874 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:29:30.082759   43874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:29:30.089011   43874 command_runner.go:130] > b5213941
	I0708 20:29:30.089111   43874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:29:30.098808   43874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:29:30.110523   43874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:29:30.115178   43874 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:29:30.115237   43874 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:29:30.115295   43874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:29:30.121010   43874 command_runner.go:130] > 51391683
	I0708 20:29:30.121202   43874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
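
The hash values above (3ec20f2e, b5213941, 51391683) come from `openssl x509 -hash`, and each certificate is then symlinked as <hash>.0 under /etc/ssl/certs so the TLS stack can look the CA up by subject hash. A minimal sketch of that link-by-hash step, assuming openssl is on PATH and the process can write to /etc/ssl/certs (i.e. runs as root on the node):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash mirrors the steps above: compute the OpenSSL subject hash for a
// CA certificate and create the <hash>.0 symlink that TLS verification uses.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, "link failed:", err)
		os.Exit(1)
	}
}
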
	I0708 20:29:30.130965   43874 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:29:30.135669   43874 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:29:30.135696   43874 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0708 20:29:30.135704   43874 command_runner.go:130] > Device: 253,1	Inode: 5245461     Links: 1
	I0708 20:29:30.135713   43874 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0708 20:29:30.135724   43874 command_runner.go:130] > Access: 2024-07-08 20:23:20.376674544 +0000
	I0708 20:29:30.135735   43874 command_runner.go:130] > Modify: 2024-07-08 20:23:20.376674544 +0000
	I0708 20:29:30.135744   43874 command_runner.go:130] > Change: 2024-07-08 20:23:20.376674544 +0000
	I0708 20:29:30.135759   43874 command_runner.go:130] >  Birth: 2024-07-08 20:23:20.376674544 +0000
	I0708 20:29:30.135925   43874 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:29:30.141861   43874 command_runner.go:130] > Certificate will not expire
	I0708 20:29:30.142148   43874 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:29:30.148147   43874 command_runner.go:130] > Certificate will not expire
	I0708 20:29:30.148245   43874 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:29:30.154162   43874 command_runner.go:130] > Certificate will not expire
	I0708 20:29:30.154427   43874 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:29:30.160352   43874 command_runner.go:130] > Certificate will not expire
	I0708 20:29:30.160502   43874 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:29:30.166283   43874 command_runner.go:130] > Certificate will not expire
	I0708 20:29:30.166450   43874 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0708 20:29:30.172270   43874 command_runner.go:130] > Certificate will not expire
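
The -checkend 86400 calls above ask whether each certificate is still valid 24 hours from now. The same check can be done with Go's standard library; a minimal sketch using one of the paths from the log (it only works on the node where that file exists):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same condition `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
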
	I0708 20:29:30.172412   43874 kubeadm.go:391] StartCluster: {Name:multinode-957088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-957088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:29:30.172559   43874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:29:30.172640   43874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:29:30.210259   43874 command_runner.go:130] > baefad39c2fab79c3b4445fbf12c07192459c3aa2a01861878418918377f387c
	I0708 20:29:30.210292   43874 command_runner.go:130] > c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac
	I0708 20:29:30.210300   43874 command_runner.go:130] > eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4
	I0708 20:29:30.210309   43874 command_runner.go:130] > 5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c
	I0708 20:29:30.210317   43874 command_runner.go:130] > 8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375
	I0708 20:29:30.210326   43874 command_runner.go:130] > 7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507
	I0708 20:29:30.210336   43874 command_runner.go:130] > 3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db
	I0708 20:29:30.210345   43874 command_runner.go:130] > bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024
	I0708 20:29:30.210373   43874 cri.go:89] found id: "baefad39c2fab79c3b4445fbf12c07192459c3aa2a01861878418918377f387c"
	I0708 20:29:30.210381   43874 cri.go:89] found id: "c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac"
	I0708 20:29:30.210384   43874 cri.go:89] found id: "eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4"
	I0708 20:29:30.210387   43874 cri.go:89] found id: "5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c"
	I0708 20:29:30.210390   43874 cri.go:89] found id: "8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375"
	I0708 20:29:30.210393   43874 cri.go:89] found id: "7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507"
	I0708 20:29:30.210396   43874 cri.go:89] found id: "3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db"
	I0708 20:29:30.210399   43874 cri.go:89] found id: "bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024"
	I0708 20:29:30.210401   43874 cri.go:89] found id: ""
	I0708 20:29:30.210440   43874 ssh_runner.go:195] Run: sudo runc list -f json
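
The container IDs listed above come from running crictl over SSH with a namespace label filter, followed by `sudo runc list -f json`. A minimal local sketch of the same crictl invocation, assuming crictl is installed and the process is allowed to talk to the CRI socket:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter as in the log: all kube-system containers, IDs only.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}
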
	
	
	==> CRI-O <==
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.054276670Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720470660054246196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e23c1805-94b4-44b0-ba24-6aef6c560a9f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.055031106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ab8326d-16c7-4ee9-8a5b-21ab882b37a2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.055103481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ab8326d-16c7-4ee9-8a5b-21ab882b37a2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.055464254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ea54c73e0f3726c901e14075c8f0809e8b173d25d9c91ce9d4ed2ff869e6062,PodSandboxId:6eb67e95826c021b12fa109d69ab787a87dd8a5871d50576c24982eaf6b0b807,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720470610178566973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76806f8a013ba1f2a9c54c275f108e7e849ffecce0b458befb76019314ca14d4,PodSandboxId:3af269b4aabae5c79730c4b4dbbbabdcf48d9f1ebba9c2add8e02e19219818ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720470576688646543,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54174b10cb5183999bad08287b0a89acebbfac005a775ceb383a4c23ce3412ac,PodSandboxId:e8e3fa51b35ad30cc477a592d8f09444768ccb4f87ad54e76a1422a60e8ae36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720470576691111320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546831c23c80e430aaab6e2a857e677f729f9290a275710847b09a7e355390e2,PodSandboxId:d733ea97b0533e3b2e08e9b2a913ee764189aafa0e159f7445c83ec05acb852d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720470576419730602,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-d84244caf4a9,},Annotations:map[string]
string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5ecc0492b2c2f6027a891fdd6f93fdf7ef1cdded7ba8958191fdaeb2796517,PodSandboxId:c9b6d5d65f23ea51f1eb7acf065a1a27a735adfd72daef063db3832f9aa1942f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720470576435358325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.ku
bernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b516f0a686a5925ebc0bd4ea92a8b6383cf03e4469d7478996644bdea1e54bb,PodSandboxId:07a085bb954d4cbb5a5d1f6aab4fc0055cc0e42f8ca06aa7ae168fd6b3ae6f40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720470572669136148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5da15967827256185eb2546419913d851533e4e51e34d1f698de18415004dda,PodSandboxId:0ccb3568f0163fae07ca185ea0b7c8845d5822bff693b7b83af8c810ac2979bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720470572616418019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2951ca64535e2caa6003d7f7a75347625c078667561b7d1e59372f1df3eba911,PodSandboxId:6d36bac90520e3b1e53aaf308dcf46f20a2162e1c17121cd653c18cf4f0b7d6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720470572569974658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ac3f4ee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d1d879b7776b5cfc71dcaee948a028e4a0628fbb3c661104ea24a5e1de9a58,PodSandboxId:18af6c77652eaf852d32c08b1f452ebcb57d868aed733e97287c3c80b91a45a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720470572525342990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3a3b62e86e9a99cb9815651f876e76dc01fece2f3da4a883d24618d81d3df8,PodSandboxId:45daa79761639627232cb3faa9c11617d117aa5dc666dc134c89d04f8b4b77d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720470268406216186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baefad39c2fab79c3b4445fbf12c07192459c3aa2a01861878418918377f387c,PodSandboxId:d198d3e471da431c3023870c9d69519f87234f13cb13c3665bec4f8611ea0f09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720470225282207533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.kubernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac,PodSandboxId:193c64f1ecc6a73d51c1762d70d307d30e2b434826143db013f1d44dddaca78e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720470224861704699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4,PodSandboxId:28bf5d2a49ccf088e781b2e0279eadf5d7b010921a8be7b053994a391c6c2e9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720470223366468421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c,PodSandboxId:d93dd4e73641f5652616875d582d89397e9f6498ab6011daf92d7734aca83bde,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720470223208155438,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-
d84244caf4a9,},Annotations:map[string]string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507,PodSandboxId:5a4433da8c657a6516644819f9fb27a5b949cbd2a194ca36cae94e87a58589bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720470203714068571,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{
io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375,PodSandboxId:0fc745b8ee3be213a585f87aa31799a7a86a5df9b91557bf723514cbac0709ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720470203773386860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db,PodSandboxId:80bae309ed5a22feb2eac1649026ca650831da62c3c1a44d119edb2b7ce40bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720470203669705068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024,PodSandboxId:d02b3fe8a7e16c5369682d53bb8df678bc4f28ed1bb7d846398c856dd394c579,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720470203639895629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: ac3f4ee6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ab8326d-16c7-4ee9-8a5b-21ab882b37a2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.102297753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c76d18be-8004-4b9e-a6db-ada0c91e3709 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.102390600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c76d18be-8004-4b9e-a6db-ada0c91e3709 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.110952737Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d63b70d-febd-448a-8e44-1bf704dd50f1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.112073623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720470660112038006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d63b70d-febd-448a-8e44-1bf704dd50f1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.113092331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6b3e021-4661-49fc-8f8f-4ef5fe7c4682 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.113265730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6b3e021-4661-49fc-8f8f-4ef5fe7c4682 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.113846986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ea54c73e0f3726c901e14075c8f0809e8b173d25d9c91ce9d4ed2ff869e6062,PodSandboxId:6eb67e95826c021b12fa109d69ab787a87dd8a5871d50576c24982eaf6b0b807,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720470610178566973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76806f8a013ba1f2a9c54c275f108e7e849ffecce0b458befb76019314ca14d4,PodSandboxId:3af269b4aabae5c79730c4b4dbbbabdcf48d9f1ebba9c2add8e02e19219818ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720470576688646543,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54174b10cb5183999bad08287b0a89acebbfac005a775ceb383a4c23ce3412ac,PodSandboxId:e8e3fa51b35ad30cc477a592d8f09444768ccb4f87ad54e76a1422a60e8ae36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720470576691111320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546831c23c80e430aaab6e2a857e677f729f9290a275710847b09a7e355390e2,PodSandboxId:d733ea97b0533e3b2e08e9b2a913ee764189aafa0e159f7445c83ec05acb852d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720470576419730602,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-d84244caf4a9,},Annotations:map[string]
string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5ecc0492b2c2f6027a891fdd6f93fdf7ef1cdded7ba8958191fdaeb2796517,PodSandboxId:c9b6d5d65f23ea51f1eb7acf065a1a27a735adfd72daef063db3832f9aa1942f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720470576435358325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.ku
bernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b516f0a686a5925ebc0bd4ea92a8b6383cf03e4469d7478996644bdea1e54bb,PodSandboxId:07a085bb954d4cbb5a5d1f6aab4fc0055cc0e42f8ca06aa7ae168fd6b3ae6f40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720470572669136148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5da15967827256185eb2546419913d851533e4e51e34d1f698de18415004dda,PodSandboxId:0ccb3568f0163fae07ca185ea0b7c8845d5822bff693b7b83af8c810ac2979bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720470572616418019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2951ca64535e2caa6003d7f7a75347625c078667561b7d1e59372f1df3eba911,PodSandboxId:6d36bac90520e3b1e53aaf308dcf46f20a2162e1c17121cd653c18cf4f0b7d6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720470572569974658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ac3f4ee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d1d879b7776b5cfc71dcaee948a028e4a0628fbb3c661104ea24a5e1de9a58,PodSandboxId:18af6c77652eaf852d32c08b1f452ebcb57d868aed733e97287c3c80b91a45a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720470572525342990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3a3b62e86e9a99cb9815651f876e76dc01fece2f3da4a883d24618d81d3df8,PodSandboxId:45daa79761639627232cb3faa9c11617d117aa5dc666dc134c89d04f8b4b77d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720470268406216186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baefad39c2fab79c3b4445fbf12c07192459c3aa2a01861878418918377f387c,PodSandboxId:d198d3e471da431c3023870c9d69519f87234f13cb13c3665bec4f8611ea0f09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720470225282207533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.kubernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac,PodSandboxId:193c64f1ecc6a73d51c1762d70d307d30e2b434826143db013f1d44dddaca78e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720470224861704699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4,PodSandboxId:28bf5d2a49ccf088e781b2e0279eadf5d7b010921a8be7b053994a391c6c2e9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720470223366468421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c,PodSandboxId:d93dd4e73641f5652616875d582d89397e9f6498ab6011daf92d7734aca83bde,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720470223208155438,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-
d84244caf4a9,},Annotations:map[string]string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507,PodSandboxId:5a4433da8c657a6516644819f9fb27a5b949cbd2a194ca36cae94e87a58589bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720470203714068571,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{
io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375,PodSandboxId:0fc745b8ee3be213a585f87aa31799a7a86a5df9b91557bf723514cbac0709ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720470203773386860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db,PodSandboxId:80bae309ed5a22feb2eac1649026ca650831da62c3c1a44d119edb2b7ce40bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720470203669705068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024,PodSandboxId:d02b3fe8a7e16c5369682d53bb8df678bc4f28ed1bb7d846398c856dd394c579,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720470203639895629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: ac3f4ee6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6b3e021-4661-49fc-8f8f-4ef5fe7c4682 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.163424741Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7476fa19-dacc-4631-ab7c-d57c4911f2f5 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.163505895Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7476fa19-dacc-4631-ab7c-d57c4911f2f5 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.166254451Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3955d079-13e7-4b2c-9030-7de392f707b6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.166797262Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720470660166765553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3955d079-13e7-4b2c-9030-7de392f707b6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.167423448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97dfa8ae-b75f-458a-9a60-e4109bc77ec4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.167483823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97dfa8ae-b75f-458a-9a60-e4109bc77ec4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.167892843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ea54c73e0f3726c901e14075c8f0809e8b173d25d9c91ce9d4ed2ff869e6062,PodSandboxId:6eb67e95826c021b12fa109d69ab787a87dd8a5871d50576c24982eaf6b0b807,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720470610178566973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76806f8a013ba1f2a9c54c275f108e7e849ffecce0b458befb76019314ca14d4,PodSandboxId:3af269b4aabae5c79730c4b4dbbbabdcf48d9f1ebba9c2add8e02e19219818ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720470576688646543,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54174b10cb5183999bad08287b0a89acebbfac005a775ceb383a4c23ce3412ac,PodSandboxId:e8e3fa51b35ad30cc477a592d8f09444768ccb4f87ad54e76a1422a60e8ae36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720470576691111320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546831c23c80e430aaab6e2a857e677f729f9290a275710847b09a7e355390e2,PodSandboxId:d733ea97b0533e3b2e08e9b2a913ee764189aafa0e159f7445c83ec05acb852d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720470576419730602,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-d84244caf4a9,},Annotations:map[string]
string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5ecc0492b2c2f6027a891fdd6f93fdf7ef1cdded7ba8958191fdaeb2796517,PodSandboxId:c9b6d5d65f23ea51f1eb7acf065a1a27a735adfd72daef063db3832f9aa1942f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720470576435358325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.ku
bernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b516f0a686a5925ebc0bd4ea92a8b6383cf03e4469d7478996644bdea1e54bb,PodSandboxId:07a085bb954d4cbb5a5d1f6aab4fc0055cc0e42f8ca06aa7ae168fd6b3ae6f40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720470572669136148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5da15967827256185eb2546419913d851533e4e51e34d1f698de18415004dda,PodSandboxId:0ccb3568f0163fae07ca185ea0b7c8845d5822bff693b7b83af8c810ac2979bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720470572616418019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2951ca64535e2caa6003d7f7a75347625c078667561b7d1e59372f1df3eba911,PodSandboxId:6d36bac90520e3b1e53aaf308dcf46f20a2162e1c17121cd653c18cf4f0b7d6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720470572569974658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ac3f4ee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d1d879b7776b5cfc71dcaee948a028e4a0628fbb3c661104ea24a5e1de9a58,PodSandboxId:18af6c77652eaf852d32c08b1f452ebcb57d868aed733e97287c3c80b91a45a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720470572525342990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3a3b62e86e9a99cb9815651f876e76dc01fece2f3da4a883d24618d81d3df8,PodSandboxId:45daa79761639627232cb3faa9c11617d117aa5dc666dc134c89d04f8b4b77d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720470268406216186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baefad39c2fab79c3b4445fbf12c07192459c3aa2a01861878418918377f387c,PodSandboxId:d198d3e471da431c3023870c9d69519f87234f13cb13c3665bec4f8611ea0f09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720470225282207533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.kubernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac,PodSandboxId:193c64f1ecc6a73d51c1762d70d307d30e2b434826143db013f1d44dddaca78e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720470224861704699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4,PodSandboxId:28bf5d2a49ccf088e781b2e0279eadf5d7b010921a8be7b053994a391c6c2e9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720470223366468421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c,PodSandboxId:d93dd4e73641f5652616875d582d89397e9f6498ab6011daf92d7734aca83bde,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720470223208155438,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-
d84244caf4a9,},Annotations:map[string]string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507,PodSandboxId:5a4433da8c657a6516644819f9fb27a5b949cbd2a194ca36cae94e87a58589bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720470203714068571,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{
io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375,PodSandboxId:0fc745b8ee3be213a585f87aa31799a7a86a5df9b91557bf723514cbac0709ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720470203773386860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db,PodSandboxId:80bae309ed5a22feb2eac1649026ca650831da62c3c1a44d119edb2b7ce40bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720470203669705068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024,PodSandboxId:d02b3fe8a7e16c5369682d53bb8df678bc4f28ed1bb7d846398c856dd394c579,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720470203639895629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: ac3f4ee6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97dfa8ae-b75f-458a-9a60-e4109bc77ec4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.213635176Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41182f7a-d6eb-4efc-a9a1-10ccbcc63824 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.213735392Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41182f7a-d6eb-4efc-a9a1-10ccbcc63824 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.214952261Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b674a97-7970-4483-94df-acc82be0699b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.215374284Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720470660215352102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b674a97-7970-4483-94df-acc82be0699b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.216044785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28f0e576-d59d-47bd-a45c-294f9479d558 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.216122727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28f0e576-d59d-47bd-a45c-294f9479d558 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:31:00 multinode-957088 crio[2827]: time="2024-07-08 20:31:00.216487707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ea54c73e0f3726c901e14075c8f0809e8b173d25d9c91ce9d4ed2ff869e6062,PodSandboxId:6eb67e95826c021b12fa109d69ab787a87dd8a5871d50576c24982eaf6b0b807,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720470610178566973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76806f8a013ba1f2a9c54c275f108e7e849ffecce0b458befb76019314ca14d4,PodSandboxId:3af269b4aabae5c79730c4b4dbbbabdcf48d9f1ebba9c2add8e02e19219818ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720470576688646543,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54174b10cb5183999bad08287b0a89acebbfac005a775ceb383a4c23ce3412ac,PodSandboxId:e8e3fa51b35ad30cc477a592d8f09444768ccb4f87ad54e76a1422a60e8ae36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720470576691111320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546831c23c80e430aaab6e2a857e677f729f9290a275710847b09a7e355390e2,PodSandboxId:d733ea97b0533e3b2e08e9b2a913ee764189aafa0e159f7445c83ec05acb852d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720470576419730602,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-d84244caf4a9,},Annotations:map[string]
string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5ecc0492b2c2f6027a891fdd6f93fdf7ef1cdded7ba8958191fdaeb2796517,PodSandboxId:c9b6d5d65f23ea51f1eb7acf065a1a27a735adfd72daef063db3832f9aa1942f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720470576435358325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.ku
bernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b516f0a686a5925ebc0bd4ea92a8b6383cf03e4469d7478996644bdea1e54bb,PodSandboxId:07a085bb954d4cbb5a5d1f6aab4fc0055cc0e42f8ca06aa7ae168fd6b3ae6f40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720470572669136148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5da15967827256185eb2546419913d851533e4e51e34d1f698de18415004dda,PodSandboxId:0ccb3568f0163fae07ca185ea0b7c8845d5822bff693b7b83af8c810ac2979bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720470572616418019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2951ca64535e2caa6003d7f7a75347625c078667561b7d1e59372f1df3eba911,PodSandboxId:6d36bac90520e3b1e53aaf308dcf46f20a2162e1c17121cd653c18cf4f0b7d6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720470572569974658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ac3f4ee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d1d879b7776b5cfc71dcaee948a028e4a0628fbb3c661104ea24a5e1de9a58,PodSandboxId:18af6c77652eaf852d32c08b1f452ebcb57d868aed733e97287c3c80b91a45a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720470572525342990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3a3b62e86e9a99cb9815651f876e76dc01fece2f3da4a883d24618d81d3df8,PodSandboxId:45daa79761639627232cb3faa9c11617d117aa5dc666dc134c89d04f8b4b77d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720470268406216186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baefad39c2fab79c3b4445fbf12c07192459c3aa2a01861878418918377f387c,PodSandboxId:d198d3e471da431c3023870c9d69519f87234f13cb13c3665bec4f8611ea0f09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720470225282207533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.kubernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac,PodSandboxId:193c64f1ecc6a73d51c1762d70d307d30e2b434826143db013f1d44dddaca78e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720470224861704699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4,PodSandboxId:28bf5d2a49ccf088e781b2e0279eadf5d7b010921a8be7b053994a391c6c2e9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720470223366468421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c,PodSandboxId:d93dd4e73641f5652616875d582d89397e9f6498ab6011daf92d7734aca83bde,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720470223208155438,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-
d84244caf4a9,},Annotations:map[string]string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507,PodSandboxId:5a4433da8c657a6516644819f9fb27a5b949cbd2a194ca36cae94e87a58589bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720470203714068571,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{
io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375,PodSandboxId:0fc745b8ee3be213a585f87aa31799a7a86a5df9b91557bf723514cbac0709ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720470203773386860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db,PodSandboxId:80bae309ed5a22feb2eac1649026ca650831da62c3c1a44d119edb2b7ce40bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720470203669705068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024,PodSandboxId:d02b3fe8a7e16c5369682d53bb8df678bc4f28ed1bb7d846398c856dd394c579,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720470203639895629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: ac3f4ee6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28f0e576-d59d-47bd-a45c-294f9479d558 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7ea54c73e0f37       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      50 seconds ago       Running             busybox                   1                   6eb67e95826c0       busybox-fc5497c4f-fqkrd
	54174b10cb518       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   e8e3fa51b35ad       coredns-7db6d8ff4d-v92sb
	76806f8a013ba       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               1                   3af269b4aabae       kindnet-9t7dr
	1e5ecc0492b2c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   c9b6d5d65f23e       storage-provisioner
	546831c23c80e       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      About a minute ago   Running             kube-proxy                1                   d733ea97b0533       kube-proxy-gfhs4
	5b516f0a686a5       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      About a minute ago   Running             kube-scheduler            1                   07a085bb954d4       kube-scheduler-multinode-957088
	e5da159678272       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   1                   0ccb3568f0163       kube-controller-manager-multinode-957088
	2951ca64535e2       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            1                   6d36bac90520e       kube-apiserver-multinode-957088
	03d1d879b7776       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   18af6c77652ea       etcd-multinode-957088
	fc3a3b62e86e9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   45daa79761639       busybox-fc5497c4f-fqkrd
	baefad39c2fab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   d198d3e471da4       storage-provisioner
	c830a371893b1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   193c64f1ecc6a       coredns-7db6d8ff4d-v92sb
	eb391894abfdb       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      7 minutes ago        Exited              kindnet-cni               0                   28bf5d2a49ccf       kindnet-9t7dr
	5e5c1809cf82f       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      7 minutes ago        Exited              kube-proxy                0                   d93dd4e73641f       kube-proxy-gfhs4
	8494ebc50dfd8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      7 minutes ago        Exited              kube-scheduler            0                   0fc745b8ee3be       kube-scheduler-multinode-957088
	7316863a44cdb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   5a4433da8c657       etcd-multinode-957088
	3a84ba8bcb826       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      7 minutes ago        Exited              kube-controller-manager   0                   80bae309ed5a2       kube-controller-manager-multinode-957088
	bcae37a9f4a92       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      7 minutes ago        Exited              kube-apiserver            0                   d02b3fe8a7e16       kube-apiserver-multinode-957088
	
	
	==> coredns [54174b10cb5183999bad08287b0a89acebbfac005a775ceb383a4c23ce3412ac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48696 - 57141 "HINFO IN 2699131153796909940.5949095140639304341. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010806425s
	
	
	==> coredns [c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac] <==
	[INFO] 10.244.1.2:39051 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001987224s
	[INFO] 10.244.1.2:34623 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123358s
	[INFO] 10.244.1.2:39567 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091793s
	[INFO] 10.244.1.2:55230 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001520414s
	[INFO] 10.244.1.2:38977 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164138s
	[INFO] 10.244.1.2:53511 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135828s
	[INFO] 10.244.1.2:41184 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112672s
	[INFO] 10.244.0.3:36500 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101415s
	[INFO] 10.244.0.3:46921 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087103s
	[INFO] 10.244.0.3:34413 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110282s
	[INFO] 10.244.0.3:59170 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056095s
	[INFO] 10.244.1.2:48146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135599s
	[INFO] 10.244.1.2:54218 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087076s
	[INFO] 10.244.1.2:43963 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097581s
	[INFO] 10.244.1.2:60755 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069718s
	[INFO] 10.244.0.3:52977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122664s
	[INFO] 10.244.0.3:38629 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000104144s
	[INFO] 10.244.0.3:43014 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182754s
	[INFO] 10.244.0.3:57813 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061916s
	[INFO] 10.244.1.2:34355 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246777s
	[INFO] 10.244.1.2:47330 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117892s
	[INFO] 10.244.1.2:52551 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000160024s
	[INFO] 10.244.1.2:60704 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093704s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-957088
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-957088
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=multinode-957088
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T20_23_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 20:23:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-957088
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:30:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 20:29:35 +0000   Mon, 08 Jul 2024 20:23:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 20:29:35 +0000   Mon, 08 Jul 2024 20:23:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 20:29:35 +0000   Mon, 08 Jul 2024 20:23:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 20:29:35 +0000   Mon, 08 Jul 2024 20:23:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    multinode-957088
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58385afd92734749810a984c4698432d
	  System UUID:                58385afd-9273-4749-810a-984c4698432d
	  Boot ID:                    423b33e5-abaf-4580-b287-154ffa19f04b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fqkrd                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 coredns-7db6d8ff4d-v92sb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m18s
	  kube-system                 etcd-multinode-957088                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m31s
	  kube-system                 kindnet-9t7dr                                100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m18s
	  kube-system                 kube-apiserver-multinode-957088              250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-controller-manager-multinode-957088     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-proxy-gfhs4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 kube-scheduler-multinode-957088              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m16s              kube-proxy       
	  Normal  Starting                 83s                kube-proxy       
	  Normal  NodeHasSufficientPID     7m32s              kubelet          Node multinode-957088 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m32s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m32s              kubelet          Node multinode-957088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m32s              kubelet          Node multinode-957088 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m32s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m19s              node-controller  Node multinode-957088 event: Registered Node multinode-957088 in Controller
	  Normal  NodeReady                7m16s              kubelet          Node multinode-957088 status is now: NodeReady
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s (x8 over 89s)  kubelet          Node multinode-957088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x8 over 89s)  kubelet          Node multinode-957088 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 89s)  kubelet          Node multinode-957088 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           72s                node-controller  Node multinode-957088 event: Registered Node multinode-957088 in Controller
	
	
	Name:               multinode-957088-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-957088-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=multinode-957088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T20_30_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 20:30:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-957088-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:30:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 20:30:47 +0000   Mon, 08 Jul 2024 20:30:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 20:30:47 +0000   Mon, 08 Jul 2024 20:30:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 20:30:47 +0000   Mon, 08 Jul 2024 20:30:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 20:30:47 +0000   Mon, 08 Jul 2024 20:30:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    multinode-957088-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 88b2e56c785e4d59b587b1a78b1fe471
	  System UUID:                88b2e56c-785e-4d59-b587-b1a78b1fe471
	  Boot ID:                    dd117f1f-1167-4125-9576-23734e9aaf73
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jmmbp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kindnet-hlbwx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m44s
	  kube-system                 kube-proxy-pwshr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m38s                  kube-proxy  
	  Normal  Starting                 39s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m44s (x2 over 6m44s)  kubelet     Node multinode-957088-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m44s (x2 over 6m44s)  kubelet     Node multinode-957088-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m44s (x2 over 6m44s)  kubelet     Node multinode-957088-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m44s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m36s                  kubelet     Node multinode-957088-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  44s (x2 over 44s)      kubelet     Node multinode-957088-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x2 over 44s)      kubelet     Node multinode-957088-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x2 over 44s)      kubelet     Node multinode-957088-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  44s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                36s                    kubelet     Node multinode-957088-m02 status is now: NodeReady
	
	
	Name:               multinode-957088-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-957088-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=multinode-957088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T20_30_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 20:30:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-957088-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:30:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 20:30:57 +0000   Mon, 08 Jul 2024 20:30:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 20:30:57 +0000   Mon, 08 Jul 2024 20:30:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 20:30:57 +0000   Mon, 08 Jul 2024 20:30:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 20:30:57 +0000   Mon, 08 Jul 2024 20:30:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    multinode-957088-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3b6a3414897840d2b285361e69fe15d6
	  System UUID:                3b6a3414-8978-40d2-b285-361e69fe15d6
	  Boot ID:                    ebdaab17-72d0-4024-a0fd-bbf96e60f3cd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-znnpz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-proxy-9qh7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m52s                  kube-proxy  
	  Normal  Starting                 6s                     kube-proxy  
	  Normal  Starting                 5m15s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  5m58s (x2 over 5m58s)  kubelet     Node multinode-957088-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x2 over 5m58s)  kubelet     Node multinode-957088-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x2 over 5m58s)  kubelet     Node multinode-957088-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m50s                  kubelet     Node multinode-957088-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m20s (x2 over 5m20s)  kubelet     Node multinode-957088-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m20s (x2 over 5m20s)  kubelet     Node multinode-957088-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m20s (x2 over 5m20s)  kubelet     Node multinode-957088-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m13s                  kubelet     Node multinode-957088-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  17s (x2 over 17s)      kubelet     Node multinode-957088-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x2 over 17s)      kubelet     Node multinode-957088-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x2 over 17s)      kubelet     Node multinode-957088-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-957088-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.056614] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.170082] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.146118] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.303307] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.316196] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.059303] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.553927] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.445612] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.617609] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.079379] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.419328] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.192172] systemd-fstab-generator[1468]: Ignoring "noauto" option for root device
	[Jul 8 20:24] kauditd_printk_skb: 84 callbacks suppressed
	[Jul 8 20:29] systemd-fstab-generator[2741]: Ignoring "noauto" option for root device
	[  +0.145148] systemd-fstab-generator[2753]: Ignoring "noauto" option for root device
	[  +0.174334] systemd-fstab-generator[2768]: Ignoring "noauto" option for root device
	[  +0.137407] systemd-fstab-generator[2781]: Ignoring "noauto" option for root device
	[  +0.295897] systemd-fstab-generator[2809]: Ignoring "noauto" option for root device
	[  +4.283876] systemd-fstab-generator[2910]: Ignoring "noauto" option for root device
	[  +0.088036] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.933379] systemd-fstab-generator[3035]: Ignoring "noauto" option for root device
	[  +4.679658] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.348404] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.515421] systemd-fstab-generator[3859]: Ignoring "noauto" option for root device
	[Jul 8 20:30] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [03d1d879b7776b5cfc71dcaee948a028e4a0628fbb3c661104ea24a5e1de9a58] <==
	{"level":"info","ts":"2024-07-08T20:29:32.997829Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T20:29:32.997857Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T20:29:33.008488Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T20:29:33.009199Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"efcba07991c99763","initial-advertise-peer-urls":["https://192.168.39.44:2380"],"listen-peer-urls":["https://192.168.39.44:2380"],"advertise-client-urls":["https://192.168.39.44:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.44:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T20:29:33.011861Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T20:29:33.011935Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.44:2380"}
	{"level":"info","ts":"2024-07-08T20:29:33.015719Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.44:2380"}
	{"level":"info","ts":"2024-07-08T20:29:34.138044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"efcba07991c99763 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-08T20:29:34.138162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"efcba07991c99763 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-08T20:29:34.138233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"efcba07991c99763 received MsgPreVoteResp from efcba07991c99763 at term 2"}
	{"level":"info","ts":"2024-07-08T20:29:34.138271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"efcba07991c99763 became candidate at term 3"}
	{"level":"info","ts":"2024-07-08T20:29:34.138295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"efcba07991c99763 received MsgVoteResp from efcba07991c99763 at term 3"}
	{"level":"info","ts":"2024-07-08T20:29:34.138323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"efcba07991c99763 became leader at term 3"}
	{"level":"info","ts":"2024-07-08T20:29:34.138365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: efcba07991c99763 elected leader efcba07991c99763 at term 3"}
	{"level":"info","ts":"2024-07-08T20:29:34.144107Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"efcba07991c99763","local-member-attributes":"{Name:multinode-957088 ClientURLs:[https://192.168.39.44:2379]}","request-path":"/0/members/efcba07991c99763/attributes","cluster-id":"aad7d4b1c0e48cd8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T20:29:34.144412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T20:29:34.14447Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T20:29:34.14452Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T20:29:34.144639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T20:29:34.146768Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T20:29:34.146836Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.44:2379"}
	{"level":"info","ts":"2024-07-08T20:30:53.99407Z","caller":"traceutil/trace.go:171","msg":"trace[633866623] linearizableReadLoop","detail":"{readStateIndex:1190; appliedIndex:1189; }","duration":"129.22436ms","start":"2024-07-08T20:30:53.864812Z","end":"2024-07-08T20:30:53.994037Z","steps":["trace[633866623] 'read index received'  (duration: 128.236067ms)","trace[633866623] 'applied index is now lower than readState.Index'  (duration: 987.473µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T20:30:53.994425Z","caller":"traceutil/trace.go:171","msg":"trace[1613955756] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"168.075541ms","start":"2024-07-08T20:30:53.826333Z","end":"2024-07-08T20:30:53.994409Z","steps":["trace[1613955756] 'process raft request'  (duration: 166.806617ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T20:30:53.994718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.836295ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:3 size:13043"}
	{"level":"info","ts":"2024-07-08T20:30:53.995353Z","caller":"traceutil/trace.go:171","msg":"trace[290880143] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:3; response_revision:1087; }","duration":"130.531803ms","start":"2024-07-08T20:30:53.864807Z","end":"2024-07-08T20:30:53.995339Z","steps":["trace[290880143] 'agreement among raft nodes before linearized reading'  (duration: 129.65892ms)"],"step_count":1}
	
	
	==> etcd [7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507] <==
	{"level":"info","ts":"2024-07-08T20:23:25.04599Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T20:23:25.049291Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T20:23:25.05328Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.44:2379"}
	{"level":"info","ts":"2024-07-08T20:24:16.750874Z","caller":"traceutil/trace.go:171","msg":"trace[17002875] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"110.917883ms","start":"2024-07-08T20:24:16.639926Z","end":"2024-07-08T20:24:16.750843Z","steps":["trace[17002875] 'process raft request'  (duration: 98.954758ms)","trace[17002875] 'compare'  (duration: 11.531297ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T20:24:16.750931Z","caller":"traceutil/trace.go:171","msg":"trace[2089420998] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"107.977547ms","start":"2024-07-08T20:24:16.642943Z","end":"2024-07-08T20:24:16.75092Z","steps":["trace[2089420998] 'process raft request'  (duration: 107.591659ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T20:25:02.626002Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.136435ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10908721687817023209 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-957088-m03.17e0569fe29a73be\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-957088-m03.17e0569fe29a73be\" value_size:646 lease:1685349650962247399 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-08T20:25:02.626415Z","caller":"traceutil/trace.go:171","msg":"trace[2045240650] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"239.454997ms","start":"2024-07-08T20:25:02.386936Z","end":"2024-07-08T20:25:02.626391Z","steps":["trace[2045240650] 'process raft request'  (duration: 83.177203ms)","trace[2045240650] 'compare'  (duration: 154.876049ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T20:25:02.62658Z","caller":"traceutil/trace.go:171","msg":"trace[2032193689] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"181.373929ms","start":"2024-07-08T20:25:02.445194Z","end":"2024-07-08T20:25:02.626568Z","steps":["trace[2032193689] 'process raft request'  (duration: 181.036684ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T20:25:04.636422Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.927755ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10908721687817023262 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-9qh7b\" mod_revision:578 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-9qh7b\" value_size:4591 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-9qh7b\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-08T20:25:04.636529Z","caller":"traceutil/trace.go:171","msg":"trace[2027595840] linearizableReadLoop","detail":"{readStateIndex:626; appliedIndex:625; }","duration":"182.570685ms","start":"2024-07-08T20:25:04.453945Z","end":"2024-07-08T20:25:04.636515Z","steps":["trace[2027595840] 'read index received'  (duration: 52.411947ms)","trace[2027595840] 'applied index is now lower than readState.Index'  (duration: 130.157644ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T20:25:04.636819Z","caller":"traceutil/trace.go:171","msg":"trace[1842556120] transaction","detail":"{read_only:false; response_revision:595; number_of_response:1; }","duration":"217.490941ms","start":"2024-07-08T20:25:04.419314Z","end":"2024-07-08T20:25:04.636805Z","steps":["trace[1842556120] 'process raft request'  (duration: 86.963846ms)","trace[1842556120] 'compare'  (duration: 129.816738ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-08T20:25:04.637003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.047522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2024-07-08T20:25:04.637049Z","caller":"traceutil/trace.go:171","msg":"trace[631158454] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:595; }","duration":"183.169661ms","start":"2024-07-08T20:25:04.453871Z","end":"2024-07-08T20:25:04.637041Z","steps":["trace[631158454] 'agreement among raft nodes before linearized reading'  (duration: 183.09019ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T20:25:04.637225Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.212544ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-07-08T20:25:04.637271Z","caller":"traceutil/trace.go:171","msg":"trace[1083375330] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:595; }","duration":"183.282905ms","start":"2024-07-08T20:25:04.453975Z","end":"2024-07-08T20:25:04.637258Z","steps":["trace[1083375330] 'agreement among raft nodes before linearized reading'  (duration: 183.223004ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T20:27:53.332763Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-08T20:27:53.33288Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-957088","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.44:2380"],"advertise-client-urls":["https://192.168.39.44:2379"]}
	{"level":"warn","ts":"2024-07-08T20:27:53.332985Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T20:27:53.333084Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T20:27:53.377468Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.44:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T20:27:53.377557Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.44:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-08T20:27:53.377716Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"efcba07991c99763","current-leader-member-id":"efcba07991c99763"}
	{"level":"info","ts":"2024-07-08T20:27:53.381534Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.44:2380"}
	{"level":"info","ts":"2024-07-08T20:27:53.381759Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.44:2380"}
	{"level":"info","ts":"2024-07-08T20:27:53.381796Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-957088","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.44:2380"],"advertise-client-urls":["https://192.168.39.44:2379"]}
	
	
	==> kernel <==
	 20:31:00 up 8 min,  0 users,  load average: 0.86, 0.35, 0.16
	Linux multinode-957088 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [76806f8a013ba1f2a9c54c275f108e7e849ffecce0b458befb76019314ca14d4] <==
	I0708 20:30:17.640709       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	I0708 20:30:27.646660       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:30:27.646701       1 main.go:227] handling current node
	I0708 20:30:27.646713       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:30:27.646717       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:30:27.646837       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0708 20:30:27.646860       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	I0708 20:30:37.680245       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:30:37.680361       1 main.go:227] handling current node
	I0708 20:30:37.680390       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:30:37.680407       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:30:37.680537       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0708 20:30:37.680557       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	I0708 20:30:47.691001       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:30:47.691086       1 main.go:227] handling current node
	I0708 20:30:47.691112       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:30:47.691129       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:30:47.691238       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0708 20:30:47.691256       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.2.0/24] 
	I0708 20:30:57.695542       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:30:57.695756       1 main.go:227] handling current node
	I0708 20:30:57.695789       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:30:57.695809       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:30:57.695958       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0708 20:30:57.695985       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4] <==
	I0708 20:27:04.378582       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	I0708 20:27:14.383717       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:27:14.383778       1 main.go:227] handling current node
	I0708 20:27:14.383800       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:27:14.383805       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:27:14.383931       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0708 20:27:14.383952       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	I0708 20:27:24.392851       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:27:24.392887       1 main.go:227] handling current node
	I0708 20:27:24.392898       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:27:24.392903       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:27:24.393002       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0708 20:27:24.393023       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	I0708 20:27:34.398063       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:27:34.398109       1 main.go:227] handling current node
	I0708 20:27:34.398120       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:27:34.398125       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:27:34.398232       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0708 20:27:34.398237       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	I0708 20:27:44.479879       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:27:44.479935       1 main.go:227] handling current node
	I0708 20:27:44.479951       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:27:44.479956       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:27:44.480110       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0708 20:27:44.480135       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2951ca64535e2caa6003d7f7a75347625c078667561b7d1e59372f1df3eba911] <==
	I0708 20:29:35.472556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0708 20:29:35.474021       1 aggregator.go:165] initial CRD sync complete...
	I0708 20:29:35.474062       1 autoregister_controller.go:141] Starting autoregister controller
	I0708 20:29:35.474070       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0708 20:29:35.510135       1 shared_informer.go:320] Caches are synced for configmaps
	I0708 20:29:35.510224       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0708 20:29:35.519226       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0708 20:29:35.519269       1 policy_source.go:224] refreshing policies
	E0708 20:29:35.543461       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0708 20:29:35.573187       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 20:29:35.575472       1 cache.go:39] Caches are synced for autoregister controller
	I0708 20:29:35.608811       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0708 20:29:35.611085       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0708 20:29:35.611320       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0708 20:29:35.611451       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0708 20:29:35.612577       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0708 20:29:35.617683       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0708 20:29:36.419165       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0708 20:29:37.855818       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 20:29:37.974175       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 20:29:37.987901       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 20:29:38.067400       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 20:29:38.074903       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0708 20:29:48.566228       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 20:29:48.622849       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024] <==
	E0708 20:23:28.769907       1 timeout.go:142] post-timeout activity - time-elapsed: 2.893663ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0708 20:23:28.998144       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 20:23:29.044056       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0708 20:23:29.060886       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 20:23:42.424405       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0708 20:23:42.503083       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0708 20:24:29.655101       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35086: use of closed network connection
	E0708 20:24:29.833129       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35102: use of closed network connection
	E0708 20:24:30.019120       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35116: use of closed network connection
	E0708 20:24:30.209336       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35128: use of closed network connection
	E0708 20:24:30.380777       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35158: use of closed network connection
	E0708 20:24:30.545966       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35174: use of closed network connection
	E0708 20:24:30.826903       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35214: use of closed network connection
	E0708 20:24:31.024241       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35232: use of closed network connection
	E0708 20:24:31.203948       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35256: use of closed network connection
	E0708 20:24:31.377520       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35280: use of closed network connection
	I0708 20:27:53.323755       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0708 20:27:53.341228       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.341349       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.341507       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.341915       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.341995       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.342087       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.342148       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.342575       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db] <==
	I0708 20:24:16.827802       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-957088-m02"
	I0708 20:24:16.847904       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-957088-m02" podCIDRs=["10.244.1.0/24"]
	I0708 20:24:24.468241       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:24:26.957178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.447029ms"
	I0708 20:24:26.973276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.029062ms"
	I0708 20:24:26.974445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.491µs"
	I0708 20:24:26.984402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.259µs"
	I0708 20:24:26.988419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="134.314µs"
	I0708 20:24:28.649651       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.802441ms"
	I0708 20:24:28.649981       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.721µs"
	I0708 20:24:29.213176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.364265ms"
	I0708 20:24:29.213419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.471µs"
	I0708 20:25:02.630099       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:25:02.633439       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-957088-m03\" does not exist"
	I0708 20:25:02.668817       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-957088-m03" podCIDRs=["10.244.2.0/24"]
	I0708 20:25:06.853126       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-957088-m03"
	I0708 20:25:10.704305       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:25:39.397372       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:25:40.554526       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-957088-m03\" does not exist"
	I0708 20:25:40.555227       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:25:40.570766       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-957088-m03" podCIDRs=["10.244.3.0/24"]
	I0708 20:25:47.733320       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:26:31.905493       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m03"
	I0708 20:26:31.966992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.015186ms"
	I0708 20:26:31.967459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.187µs"
	
	
	==> kube-controller-manager [e5da15967827256185eb2546419913d851533e4e51e34d1f698de18415004dda] <==
	I0708 20:29:48.941418       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 20:29:49.016335       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 20:29:49.016422       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0708 20:30:12.534457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.176315ms"
	I0708 20:30:12.543154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.450463ms"
	I0708 20:30:12.543259       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.366µs"
	I0708 20:30:16.848582       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-957088-m02\" does not exist"
	I0708 20:30:16.874000       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-957088-m02" podCIDRs=["10.244.1.0/24"]
	I0708 20:30:18.691219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.171µs"
	I0708 20:30:18.752145       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.624µs"
	I0708 20:30:18.798402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.413µs"
	I0708 20:30:18.808384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.619µs"
	I0708 20:30:18.815490       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.239µs"
	I0708 20:30:18.822380       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.771µs"
	I0708 20:30:18.827086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="200.613µs"
	I0708 20:30:24.039850       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:30:24.057285       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.326µs"
	I0708 20:30:24.070438       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.91µs"
	I0708 20:30:26.311565       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.709611ms"
	I0708 20:30:26.312047       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.11µs"
	I0708 20:30:42.429570       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:30:43.532529       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:30:43.532740       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-957088-m03\" does not exist"
	I0708 20:30:43.543461       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-957088-m03" podCIDRs=["10.244.2.0/24"]
	I0708 20:30:57.167430       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	
	
	==> kube-proxy [546831c23c80e430aaab6e2a857e677f729f9290a275710847b09a7e355390e2] <==
	I0708 20:29:36.800647       1 server_linux.go:69] "Using iptables proxy"
	I0708 20:29:36.831690       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.44"]
	I0708 20:29:36.909325       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 20:29:36.909378       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 20:29:36.909395       1 server_linux.go:165] "Using iptables Proxier"
	I0708 20:29:36.923227       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 20:29:36.923494       1 server.go:872] "Version info" version="v1.30.2"
	I0708 20:29:36.923522       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:29:36.925182       1 config.go:192] "Starting service config controller"
	I0708 20:29:36.925228       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 20:29:36.925255       1 config.go:101] "Starting endpoint slice config controller"
	I0708 20:29:36.925277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 20:29:36.925920       1 config.go:319] "Starting node config controller"
	I0708 20:29:36.925946       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 20:29:37.026169       1 shared_informer.go:320] Caches are synced for node config
	I0708 20:29:37.026252       1 shared_informer.go:320] Caches are synced for service config
	I0708 20:29:37.026290       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c] <==
	I0708 20:23:43.492668       1 server_linux.go:69] "Using iptables proxy"
	I0708 20:23:43.515278       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.44"]
	I0708 20:23:43.567583       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 20:23:43.567753       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 20:23:43.567784       1 server_linux.go:165] "Using iptables Proxier"
	I0708 20:23:43.570488       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 20:23:43.570755       1 server.go:872] "Version info" version="v1.30.2"
	I0708 20:23:43.570928       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:23:43.572241       1 config.go:192] "Starting service config controller"
	I0708 20:23:43.572289       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 20:23:43.572327       1 config.go:101] "Starting endpoint slice config controller"
	I0708 20:23:43.572343       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 20:23:43.573210       1 config.go:319] "Starting node config controller"
	I0708 20:23:43.573270       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 20:23:43.673102       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 20:23:43.673143       1 shared_informer.go:320] Caches are synced for service config
	I0708 20:23:43.673361       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5b516f0a686a5925ebc0bd4ea92a8b6383cf03e4469d7478996644bdea1e54bb] <==
	I0708 20:29:33.578160       1 serving.go:380] Generated self-signed cert in-memory
	W0708 20:29:35.471574       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 20:29:35.471675       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 20:29:35.471745       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 20:29:35.471770       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 20:29:35.503159       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0708 20:29:35.503986       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:29:35.510645       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0708 20:29:35.510820       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0708 20:29:35.510858       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 20:29:35.510891       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0708 20:29:35.533850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 20:29:35.552667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 20:29:35.535228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 20:29:35.552739       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0708 20:29:35.535371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 20:29:35.552755       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0708 20:29:35.614143       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375] <==
	E0708 20:23:26.468733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 20:23:26.468044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 20:23:26.468751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 20:23:26.468086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 20:23:26.468764       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 20:23:27.275883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 20:23:27.275933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0708 20:23:27.381926       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 20:23:27.381987       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 20:23:27.391307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 20:23:27.391424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 20:23:27.583696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 20:23:27.583804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 20:23:27.647641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 20:23:27.647785       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 20:23:27.652294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 20:23:27.652415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 20:23:27.693084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 20:23:27.693229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 20:23:27.706240       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0708 20:23:27.706339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0708 20:23:27.727525       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 20:23:27.727708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0708 20:23:30.258112       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0708 20:27:53.346370       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 08 20:29:32 multinode-957088 kubelet[3042]: W0708 20:29:32.955338    3042 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.44:8443: connect: connection refused
	Jul 08 20:29:32 multinode-957088 kubelet[3042]: E0708 20:29:32.956333    3042 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.44:8443: connect: connection refused
	Jul 08 20:29:33 multinode-957088 kubelet[3042]: I0708 20:29:33.366552    3042 kubelet_node_status.go:73] "Attempting to register node" node="multinode-957088"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.566746    3042 kubelet_node_status.go:112] "Node was previously registered" node="multinode-957088"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.567164    3042 kubelet_node_status.go:76] "Successfully registered node" node="multinode-957088"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.570877    3042 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.572535    3042 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.838534    3042 apiserver.go:52] "Watching apiserver"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.842487    3042 topology_manager.go:215] "Topology Admit Handler" podUID="26461f24-d94c-4eaa-bfa7-0633c4c556e8" podNamespace="kube-system" podName="kindnet-9t7dr"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.843485    3042 topology_manager.go:215] "Topology Admit Handler" podUID="804d9347-ea15-4821-819b-d84244caf4a9" podNamespace="kube-system" podName="kube-proxy-gfhs4"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.843722    3042 topology_manager.go:215] "Topology Admit Handler" podUID="26175213-712f-41f8-b39b-ba4691346d29" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v92sb"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.843972    3042 topology_manager.go:215] "Topology Admit Handler" podUID="4a1bfd34-af18-42f9-92c6-e5a902ca9229" podNamespace="kube-system" podName="storage-provisioner"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.844077    3042 topology_manager.go:215] "Topology Admit Handler" podUID="c920ac4a-fa2f-4e6a-a937-650806f738ad" podNamespace="default" podName="busybox-fc5497c4f-fqkrd"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.859054    3042 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.871552    3042 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/26461f24-d94c-4eaa-bfa7-0633c4c556e8-cni-cfg\") pod \"kindnet-9t7dr\" (UID: \"26461f24-d94c-4eaa-bfa7-0633c4c556e8\") " pod="kube-system/kindnet-9t7dr"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.871782    3042 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26461f24-d94c-4eaa-bfa7-0633c4c556e8-xtables-lock\") pod \"kindnet-9t7dr\" (UID: \"26461f24-d94c-4eaa-bfa7-0633c4c556e8\") " pod="kube-system/kindnet-9t7dr"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.872010    3042 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26461f24-d94c-4eaa-bfa7-0633c4c556e8-lib-modules\") pod \"kindnet-9t7dr\" (UID: \"26461f24-d94c-4eaa-bfa7-0633c4c556e8\") " pod="kube-system/kindnet-9t7dr"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.872109    3042 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/804d9347-ea15-4821-819b-d84244caf4a9-xtables-lock\") pod \"kube-proxy-gfhs4\" (UID: \"804d9347-ea15-4821-819b-d84244caf4a9\") " pod="kube-system/kube-proxy-gfhs4"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.872286    3042 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/804d9347-ea15-4821-819b-d84244caf4a9-lib-modules\") pod \"kube-proxy-gfhs4\" (UID: \"804d9347-ea15-4821-819b-d84244caf4a9\") " pod="kube-system/kube-proxy-gfhs4"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.872383    3042 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4a1bfd34-af18-42f9-92c6-e5a902ca9229-tmp\") pod \"storage-provisioner\" (UID: \"4a1bfd34-af18-42f9-92c6-e5a902ca9229\") " pod="kube-system/storage-provisioner"
	Jul 08 20:30:31 multinode-957088 kubelet[3042]: E0708 20:30:31.925789    3042 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:30:31 multinode-957088 kubelet[3042]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:30:31 multinode-957088 kubelet[3042]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:30:31 multinode-957088 kubelet[3042]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:30:31 multinode-957088 kubelet[3042]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:30:59.782480   44891 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19195-5988/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
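
The "bufio.Scanner: token too long" error in the stderr block above is a standard Go failure mode: bufio.Scanner rejects any line longer than its default 64 KiB token limit (bufio.MaxScanTokenSize), which is what happens when lastStart.txt contains a very long line. The sketch below is illustrative only, not minikube's log-reading code; the file name is a stand-in for the truncated log path. It shows where the error surfaces and how Scanner.Buffer raises the limit.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // hypothetical stand-in for the log file named in the error above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line limit to 10 MiB; without this call, any line over
	// bufio.MaxScanTokenSize (64 KiB) makes sc.Err() return bufio.ErrTooLong,
	// i.e. "bufio.Scanner: token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		_ = sc.Text() // process one log line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}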
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-957088 -n multinode-957088
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-957088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (311.24s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 stop
E0708 20:31:12.784516   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 20:31:29.734746   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 20:32:26.894771   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-957088 stop: exit status 82 (2m0.472489699s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-957088-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-957088 stop": exit status 82
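
Exit status 82 with GUEST_STOP_TIMEOUT means the stop request was issued but the VM still reported "Running" when the roughly two-minute budget ran out. Below is a generic sketch of that poll-with-deadline shape, under stated assumptions: stopVM and vmState are hypothetical stand-ins, not minikube or libvirt APIs, and the short durations in main only keep the example quick.

package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical stand-ins for a VM driver's stop and status calls.
func stopVM() error   { return nil }
func vmState() string { return "Running" }

// stopWithTimeout issues a stop and polls the reported state until it reads
// "Stopped" or the deadline passes; giving up while the state still reads
// "Running" mirrors the failure reported above.
func stopWithTimeout(timeout, interval time.Duration) error {
	if err := stopVM(); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if vmState() == "Stopped" {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// The real test allowed about 2 minutes; short values here keep the sketch fast.
	if err := stopWithTimeout(3*time.Second, time.Second); err != nil {
		fmt.Println("stop failed:", err)
	}
}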
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-957088 status: exit status 3 (18.713357315s)

                                                
                                                
-- stdout --
	multinode-957088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-957088-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:33:23.203821   45557 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E0708 20:33:23.203866   45557 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-957088 status" : exit status 3
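
The repeated "dial tcp 192.168.39.125:22: connect: no route to host" lines above show why status exits with code 3: the check tries to open an SSH session to the worker node and the VM is no longer routable. A minimal reachability probe of the same shape, assuming nothing about minikube's internals beyond the address visible in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.125:22" // the unreachable worker's SSH endpoint from the stderr above
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// On an unroutable host this reports e.g. "connect: no route to host".
		fmt.Println("node unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("node reachable")
}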
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-957088 -n multinode-957088
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-957088 logs -n 25: (1.526917912s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp multinode-957088-m02:/home/docker/cp-test.txt                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088:/home/docker/cp-test_multinode-957088-m02_multinode-957088.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n multinode-957088 sudo cat                                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /home/docker/cp-test_multinode-957088-m02_multinode-957088.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp multinode-957088-m02:/home/docker/cp-test.txt                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03:/home/docker/cp-test_multinode-957088-m02_multinode-957088-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n multinode-957088-m03 sudo cat                                   | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /home/docker/cp-test_multinode-957088-m02_multinode-957088-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp testdata/cp-test.txt                                                | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp multinode-957088-m03:/home/docker/cp-test.txt                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4089420253/001/cp-test_multinode-957088-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp multinode-957088-m03:/home/docker/cp-test.txt                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088:/home/docker/cp-test_multinode-957088-m03_multinode-957088.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n multinode-957088 sudo cat                                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /home/docker/cp-test_multinode-957088-m03_multinode-957088.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-957088 cp multinode-957088-m03:/home/docker/cp-test.txt                       | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m02:/home/docker/cp-test_multinode-957088-m03_multinode-957088-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n multinode-957088-m02 sudo cat                                   | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /home/docker/cp-test_multinode-957088-m03_multinode-957088-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-957088 node stop m03                                                          | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	| node    | multinode-957088 node start                                                             | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-957088                                                                | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC |                     |
	| stop    | -p multinode-957088                                                                     | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC |                     |
	| start   | -p multinode-957088                                                                     | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:27 UTC | 08 Jul 24 20:30 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-957088                                                                | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:30 UTC |                     |
	| node    | multinode-957088 node delete                                                            | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:31 UTC | 08 Jul 24 20:31 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-957088 stop                                                                   | multinode-957088 | jenkins | v1.33.1 | 08 Jul 24 20:31 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
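	For reference, a minimal sketch of the stop/restart sequence recorded in the table above, run against the already-existing "multinode-957088" profile; the flags are copied from the table rows and the binary path from the MINIKUBE_BIN value shown in the log below (this is a reconstruction of the audited commands, not additional test output):
	
	  out/minikube-linux-amd64 stop -p multinode-957088
	  out/minikube-linux-amd64 start -p multinode-957088 --wait=true -v=8 --alsologtostderr
	  out/minikube-linux-amd64 node list -p multinode-957088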
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 20:27:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 20:27:52.506407   43874 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:27:52.506677   43874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:27:52.506686   43874 out.go:304] Setting ErrFile to fd 2...
	I0708 20:27:52.506691   43874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:27:52.506879   43874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:27:52.507403   43874 out.go:298] Setting JSON to false
	I0708 20:27:52.508243   43874 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4221,"bootTime":1720466251,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:27:52.508298   43874 start.go:139] virtualization: kvm guest
	I0708 20:27:52.510536   43874 out.go:177] * [multinode-957088] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:27:52.511793   43874 notify.go:220] Checking for updates...
	I0708 20:27:52.511803   43874 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:27:52.513036   43874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:27:52.514502   43874 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:27:52.515756   43874 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:27:52.516985   43874 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:27:52.518363   43874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:27:52.520130   43874 config.go:182] Loaded profile config "multinode-957088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:27:52.520213   43874 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:27:52.520604   43874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:27:52.520682   43874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:27:52.535899   43874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0708 20:27:52.536385   43874 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:27:52.536932   43874 main.go:141] libmachine: Using API Version  1
	I0708 20:27:52.536955   43874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:27:52.537340   43874 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:27:52.537534   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:27:52.572176   43874 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 20:27:52.573354   43874 start.go:297] selected driver: kvm2
	I0708 20:27:52.573367   43874 start.go:901] validating driver "kvm2" against &{Name:multinode-957088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-957088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:27:52.573496   43874 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:27:52.573803   43874 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:27:52.573871   43874 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:27:52.588344   43874 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:27:52.588968   43874 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:27:52.589023   43874 cni.go:84] Creating CNI manager for ""
	I0708 20:27:52.589035   43874 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0708 20:27:52.589083   43874 start.go:340] cluster config:
	{Name:multinode-957088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-957088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:27:52.589198   43874 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:27:52.590924   43874 out.go:177] * Starting "multinode-957088" primary control-plane node in "multinode-957088" cluster
	I0708 20:27:52.592097   43874 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:27:52.592134   43874 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 20:27:52.592140   43874 cache.go:56] Caching tarball of preloaded images
	I0708 20:27:52.592206   43874 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 20:27:52.592216   43874 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 20:27:52.592318   43874 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/config.json ...
	I0708 20:27:52.592491   43874 start.go:360] acquireMachinesLock for multinode-957088: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:27:52.592528   43874 start.go:364] duration metric: took 20.835µs to acquireMachinesLock for "multinode-957088"
	I0708 20:27:52.592542   43874 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:27:52.592553   43874 fix.go:54] fixHost starting: 
	I0708 20:27:52.592792   43874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:27:52.592818   43874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:27:52.606901   43874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I0708 20:27:52.607318   43874 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:27:52.607779   43874 main.go:141] libmachine: Using API Version  1
	I0708 20:27:52.607802   43874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:27:52.608196   43874 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:27:52.608509   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:27:52.608689   43874 main.go:141] libmachine: (multinode-957088) Calling .GetState
	I0708 20:27:52.610207   43874 fix.go:112] recreateIfNeeded on multinode-957088: state=Running err=<nil>
	W0708 20:27:52.610236   43874 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:27:52.612228   43874 out.go:177] * Updating the running kvm2 "multinode-957088" VM ...
	I0708 20:27:52.613578   43874 machine.go:94] provisionDockerMachine start ...
	I0708 20:27:52.613599   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:27:52.613799   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:27:52.616165   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.616565   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:52.616600   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.616720   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:27:52.616871   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:52.617056   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:52.617199   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:27:52.617381   43874 main.go:141] libmachine: Using SSH client type: native
	I0708 20:27:52.617620   43874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0708 20:27:52.617632   43874 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:27:52.728870   43874 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-957088
	
	I0708 20:27:52.728904   43874 main.go:141] libmachine: (multinode-957088) Calling .GetMachineName
	I0708 20:27:52.729203   43874 buildroot.go:166] provisioning hostname "multinode-957088"
	I0708 20:27:52.729227   43874 main.go:141] libmachine: (multinode-957088) Calling .GetMachineName
	I0708 20:27:52.729410   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:27:52.732202   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.732510   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:52.732542   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.732689   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:27:52.732886   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:52.733084   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:52.733281   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:27:52.733458   43874 main.go:141] libmachine: Using SSH client type: native
	I0708 20:27:52.733607   43874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0708 20:27:52.733619   43874 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-957088 && echo "multinode-957088" | sudo tee /etc/hostname
	I0708 20:27:52.856905   43874 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-957088
	
	I0708 20:27:52.856934   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:27:52.859429   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.859762   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:52.859790   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.859924   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:27:52.860110   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:52.860246   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:52.860342   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:27:52.860468   43874 main.go:141] libmachine: Using SSH client type: native
	I0708 20:27:52.860735   43874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0708 20:27:52.860766   43874 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-957088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-957088/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-957088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:27:52.968499   43874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:27:52.968532   43874 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:27:52.968554   43874 buildroot.go:174] setting up certificates
	I0708 20:27:52.968566   43874 provision.go:84] configureAuth start
	I0708 20:27:52.968577   43874 main.go:141] libmachine: (multinode-957088) Calling .GetMachineName
	I0708 20:27:52.968886   43874 main.go:141] libmachine: (multinode-957088) Calling .GetIP
	I0708 20:27:52.971504   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.971859   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:52.971881   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.972029   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:27:52.974255   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.974559   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:52.974587   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:52.974750   43874 provision.go:143] copyHostCerts
	I0708 20:27:52.974778   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:27:52.974804   43874 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:27:52.974813   43874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:27:52.974884   43874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:27:52.974963   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:27:52.974979   43874 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:27:52.974985   43874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:27:52.975008   43874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:27:52.975046   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:27:52.975061   43874 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:27:52.975067   43874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:27:52.975086   43874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:27:52.975138   43874 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.multinode-957088 san=[127.0.0.1 192.168.39.44 localhost minikube multinode-957088]
	I0708 20:27:53.029975   43874 provision.go:177] copyRemoteCerts
	I0708 20:27:53.030024   43874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:27:53.030049   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:27:53.032505   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:53.032868   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:53.032900   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:53.033086   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:27:53.033279   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:53.033416   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:27:53.033547   43874 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/multinode-957088/id_rsa Username:docker}
	I0708 20:27:53.118390   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 20:27:53.118449   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:27:53.145389   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 20:27:53.145495   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0708 20:27:53.172386   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 20:27:53.172462   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:27:53.198469   43874 provision.go:87] duration metric: took 229.879254ms to configureAuth
	I0708 20:27:53.198501   43874 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:27:53.198745   43874 config.go:182] Loaded profile config "multinode-957088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:27:53.198823   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:27:53.201225   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:53.201625   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:27:53.201657   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:27:53.201818   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:27:53.202005   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:53.202151   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:27:53.202320   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:27:53.202484   43874 main.go:141] libmachine: Using SSH client type: native
	I0708 20:27:53.202632   43874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0708 20:27:53.202646   43874 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:29:23.921944   43874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:29:23.921975   43874 machine.go:97] duration metric: took 1m31.30838381s to provisionDockerMachine
	I0708 20:29:23.921989   43874 start.go:293] postStartSetup for "multinode-957088" (driver="kvm2")
	I0708 20:29:23.921999   43874 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:29:23.922049   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:29:23.922373   43874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:29:23.922397   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:29:23.925538   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:23.925913   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:29:23.925938   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:23.926070   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:29:23.926271   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:29:23.926425   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:29:23.926571   43874 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/multinode-957088/id_rsa Username:docker}
	I0708 20:29:24.013495   43874 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:29:24.017847   43874 command_runner.go:130] > NAME=Buildroot
	I0708 20:29:24.017873   43874 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0708 20:29:24.017879   43874 command_runner.go:130] > ID=buildroot
	I0708 20:29:24.017886   43874 command_runner.go:130] > VERSION_ID=2023.02.9
	I0708 20:29:24.017892   43874 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0708 20:29:24.017950   43874 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:29:24.017967   43874 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:29:24.018025   43874 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:29:24.018124   43874 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:29:24.018136   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /etc/ssl/certs/131412.pem
	I0708 20:29:24.018248   43874 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:29:24.028598   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:29:24.071657   43874 start.go:296] duration metric: took 149.638569ms for postStartSetup
	I0708 20:29:24.071704   43874 fix.go:56] duration metric: took 1m31.479153778s for fixHost
	I0708 20:29:24.071727   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:29:24.074668   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.075177   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:29:24.075207   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.075370   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:29:24.075568   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:29:24.075724   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:29:24.075862   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:29:24.076034   43874 main.go:141] libmachine: Using SSH client type: native
	I0708 20:29:24.076225   43874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0708 20:29:24.076235   43874 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:29:24.184353   43874 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720470564.154325657
	
	I0708 20:29:24.184373   43874 fix.go:216] guest clock: 1720470564.154325657
	I0708 20:29:24.184381   43874 fix.go:229] Guest: 2024-07-08 20:29:24.154325657 +0000 UTC Remote: 2024-07-08 20:29:24.071708715 +0000 UTC m=+91.599039386 (delta=82.616942ms)
	I0708 20:29:24.184419   43874 fix.go:200] guest clock delta is within tolerance: 82.616942ms
	I0708 20:29:24.184428   43874 start.go:83] releasing machines lock for "multinode-957088", held for 1m31.591890954s
	I0708 20:29:24.184515   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:29:24.184806   43874 main.go:141] libmachine: (multinode-957088) Calling .GetIP
	I0708 20:29:24.187104   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.187440   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:29:24.187485   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.187622   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:29:24.188163   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:29:24.188335   43874 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:29:24.188433   43874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:29:24.188469   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:29:24.188562   43874 ssh_runner.go:195] Run: cat /version.json
	I0708 20:29:24.188579   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:29:24.191033   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.191262   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.191535   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:29:24.191583   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.191696   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:29:24.191825   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:29:24.191878   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:29:24.191911   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:24.191946   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:29:24.192035   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:29:24.192099   43874 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/multinode-957088/id_rsa Username:docker}
	I0708 20:29:24.192154   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:29:24.192265   43874 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:29:24.192380   43874 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/multinode-957088/id_rsa Username:docker}
	I0708 20:29:24.268638   43874 command_runner.go:130] > {"iso_version": "v1.33.1-1720011972-19186", "kicbase_version": "v0.0.44-1719972989-19184", "minikube_version": "v1.33.1", "commit": "31623406c84ecd024e1cf2c4d9dbac99bd5bb2b3"}
	I0708 20:29:24.268882   43874 ssh_runner.go:195] Run: systemctl --version
	I0708 20:29:24.292153   43874 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0708 20:29:24.292906   43874 command_runner.go:130] > systemd 252 (252)
	I0708 20:29:24.292951   43874 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0708 20:29:24.293010   43874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:29:24.455755   43874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0708 20:29:24.462332   43874 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0708 20:29:24.462516   43874 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:29:24.462568   43874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:29:24.471992   43874 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0708 20:29:24.472017   43874 start.go:494] detecting cgroup driver to use...
	I0708 20:29:24.472084   43874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:29:24.488420   43874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:29:24.502410   43874 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:29:24.502472   43874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:29:24.516277   43874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:29:24.530250   43874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:29:24.673910   43874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:29:24.816353   43874 docker.go:233] disabling docker service ...
	I0708 20:29:24.816410   43874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:29:24.832875   43874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:29:24.846837   43874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:29:24.986614   43874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:29:25.129398   43874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:29:25.144230   43874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:29:25.164306   43874 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0708 20:29:25.164359   43874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:29:25.164423   43874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.176254   43874 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:29:25.176317   43874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.187522   43874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.198819   43874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.209967   43874 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:29:25.221585   43874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.233026   43874 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.245503   43874 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:29:25.257042   43874 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:29:25.267287   43874 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0708 20:29:25.267363   43874 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:29:25.277423   43874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:29:25.425906   43874 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:29:29.219283   43874 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.793339419s)
	I0708 20:29:29.219319   43874 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:29:29.219369   43874 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:29:29.224792   43874 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0708 20:29:29.224817   43874 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0708 20:29:29.224824   43874 command_runner.go:130] > Device: 0,22	Inode: 1329        Links: 1
	I0708 20:29:29.224830   43874 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0708 20:29:29.224835   43874 command_runner.go:130] > Access: 2024-07-08 20:29:29.053131358 +0000
	I0708 20:29:29.224840   43874 command_runner.go:130] > Modify: 2024-07-08 20:29:29.053131358 +0000
	I0708 20:29:29.224845   43874 command_runner.go:130] > Change: 2024-07-08 20:29:29.053131358 +0000
	I0708 20:29:29.224848   43874 command_runner.go:130] >  Birth: -
	I0708 20:29:29.224998   43874 start.go:562] Will wait 60s for crictl version
	I0708 20:29:29.225073   43874 ssh_runner.go:195] Run: which crictl
	I0708 20:29:29.229434   43874 command_runner.go:130] > /usr/bin/crictl
	I0708 20:29:29.229520   43874 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:29:29.272608   43874 command_runner.go:130] > Version:  0.1.0
	I0708 20:29:29.272636   43874 command_runner.go:130] > RuntimeName:  cri-o
	I0708 20:29:29.272641   43874 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0708 20:29:29.272648   43874 command_runner.go:130] > RuntimeApiVersion:  v1
	I0708 20:29:29.272673   43874 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:29:29.272761   43874 ssh_runner.go:195] Run: crio --version
	I0708 20:29:29.307075   43874 command_runner.go:130] > crio version 1.29.1
	I0708 20:29:29.307102   43874 command_runner.go:130] > Version:        1.29.1
	I0708 20:29:29.307116   43874 command_runner.go:130] > GitCommit:      unknown
	I0708 20:29:29.307124   43874 command_runner.go:130] > GitCommitDate:  unknown
	I0708 20:29:29.307131   43874 command_runner.go:130] > GitTreeState:   clean
	I0708 20:29:29.307140   43874 command_runner.go:130] > BuildDate:      2024-07-03T18:31:34Z
	I0708 20:29:29.307147   43874 command_runner.go:130] > GoVersion:      go1.21.6
	I0708 20:29:29.307153   43874 command_runner.go:130] > Compiler:       gc
	I0708 20:29:29.307160   43874 command_runner.go:130] > Platform:       linux/amd64
	I0708 20:29:29.307179   43874 command_runner.go:130] > Linkmode:       dynamic
	I0708 20:29:29.307185   43874 command_runner.go:130] > BuildTags:      
	I0708 20:29:29.307196   43874 command_runner.go:130] >   containers_image_ostree_stub
	I0708 20:29:29.307203   43874 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0708 20:29:29.307210   43874 command_runner.go:130] >   btrfs_noversion
	I0708 20:29:29.307216   43874 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0708 20:29:29.307225   43874 command_runner.go:130] >   libdm_no_deferred_remove
	I0708 20:29:29.307228   43874 command_runner.go:130] >   seccomp
	I0708 20:29:29.307232   43874 command_runner.go:130] > LDFlags:          unknown
	I0708 20:29:29.307236   43874 command_runner.go:130] > SeccompEnabled:   true
	I0708 20:29:29.307240   43874 command_runner.go:130] > AppArmorEnabled:  false
	I0708 20:29:29.307303   43874 ssh_runner.go:195] Run: crio --version
	I0708 20:29:29.337383   43874 command_runner.go:130] > crio version 1.29.1
	I0708 20:29:29.337405   43874 command_runner.go:130] > Version:        1.29.1
	I0708 20:29:29.337410   43874 command_runner.go:130] > GitCommit:      unknown
	I0708 20:29:29.337414   43874 command_runner.go:130] > GitCommitDate:  unknown
	I0708 20:29:29.337418   43874 command_runner.go:130] > GitTreeState:   clean
	I0708 20:29:29.337423   43874 command_runner.go:130] > BuildDate:      2024-07-03T18:31:34Z
	I0708 20:29:29.337427   43874 command_runner.go:130] > GoVersion:      go1.21.6
	I0708 20:29:29.337431   43874 command_runner.go:130] > Compiler:       gc
	I0708 20:29:29.337435   43874 command_runner.go:130] > Platform:       linux/amd64
	I0708 20:29:29.337440   43874 command_runner.go:130] > Linkmode:       dynamic
	I0708 20:29:29.337444   43874 command_runner.go:130] > BuildTags:      
	I0708 20:29:29.337448   43874 command_runner.go:130] >   containers_image_ostree_stub
	I0708 20:29:29.337452   43874 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0708 20:29:29.337456   43874 command_runner.go:130] >   btrfs_noversion
	I0708 20:29:29.337463   43874 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0708 20:29:29.337469   43874 command_runner.go:130] >   libdm_no_deferred_remove
	I0708 20:29:29.337473   43874 command_runner.go:130] >   seccomp
	I0708 20:29:29.337479   43874 command_runner.go:130] > LDFlags:          unknown
	I0708 20:29:29.337491   43874 command_runner.go:130] > SeccompEnabled:   true
	I0708 20:29:29.337496   43874 command_runner.go:130] > AppArmorEnabled:  false
	I0708 20:29:29.340162   43874 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:29:29.341543   43874 main.go:141] libmachine: (multinode-957088) Calling .GetIP
	I0708 20:29:29.344351   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:29.344741   43874 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:29:29.344770   43874 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:29:29.344905   43874 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 20:29:29.349312   43874 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0708 20:29:29.349469   43874 kubeadm.go:877] updating cluster {Name:multinode-957088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-957088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:29:29.349610   43874 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:29:29.349659   43874 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:29:29.397121   43874 command_runner.go:130] > {
	I0708 20:29:29.397146   43874 command_runner.go:130] >   "images": [
	I0708 20:29:29.397152   43874 command_runner.go:130] >     {
	I0708 20:29:29.397165   43874 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0708 20:29:29.397172   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.397204   43874 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0708 20:29:29.397214   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397220   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.397241   43874 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0708 20:29:29.397255   43874 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0708 20:29:29.397264   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397275   43874 command_runner.go:130] >       "size": "65908273",
	I0708 20:29:29.397282   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.397292   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.397305   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.397322   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.397331   43874 command_runner.go:130] >     },
	I0708 20:29:29.397336   43874 command_runner.go:130] >     {
	I0708 20:29:29.397350   43874 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0708 20:29:29.397361   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.397372   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0708 20:29:29.397380   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397388   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.397402   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0708 20:29:29.397416   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0708 20:29:29.397424   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397431   43874 command_runner.go:130] >       "size": "1363676",
	I0708 20:29:29.397439   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.397448   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.397457   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.397465   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.397473   43874 command_runner.go:130] >     },
	I0708 20:29:29.397481   43874 command_runner.go:130] >     {
	I0708 20:29:29.397491   43874 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0708 20:29:29.397499   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.397509   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0708 20:29:29.397517   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397523   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.397536   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0708 20:29:29.397549   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0708 20:29:29.397557   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397564   43874 command_runner.go:130] >       "size": "31470524",
	I0708 20:29:29.397572   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.397586   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.397595   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.397603   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.397617   43874 command_runner.go:130] >     },
	I0708 20:29:29.397624   43874 command_runner.go:130] >     {
	I0708 20:29:29.397632   43874 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0708 20:29:29.397641   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.397652   43874 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0708 20:29:29.397667   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397675   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.397688   43874 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0708 20:29:29.397727   43874 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0708 20:29:29.397736   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397743   43874 command_runner.go:130] >       "size": "61245718",
	I0708 20:29:29.397751   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.397760   43874 command_runner.go:130] >       "username": "nonroot",
	I0708 20:29:29.397769   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.397775   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.397784   43874 command_runner.go:130] >     },
	I0708 20:29:29.397792   43874 command_runner.go:130] >     {
	I0708 20:29:29.397804   43874 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0708 20:29:29.397813   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.397823   43874 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0708 20:29:29.397832   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397839   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.397852   43874 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0708 20:29:29.397866   43874 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0708 20:29:29.397875   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397884   43874 command_runner.go:130] >       "size": "150779692",
	I0708 20:29:29.397893   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.397900   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.397907   43874 command_runner.go:130] >       },
	I0708 20:29:29.397917   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.397926   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.397935   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.397943   43874 command_runner.go:130] >     },
	I0708 20:29:29.397951   43874 command_runner.go:130] >     {
	I0708 20:29:29.397960   43874 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0708 20:29:29.397968   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.397981   43874 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0708 20:29:29.397988   43874 command_runner.go:130] >       ],
	I0708 20:29:29.397993   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.398008   43874 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0708 20:29:29.398023   43874 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0708 20:29:29.398038   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398047   43874 command_runner.go:130] >       "size": "117609954",
	I0708 20:29:29.398052   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.398060   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.398065   43874 command_runner.go:130] >       },
	I0708 20:29:29.398073   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.398079   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.398087   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.398093   43874 command_runner.go:130] >     },
	I0708 20:29:29.398100   43874 command_runner.go:130] >     {
	I0708 20:29:29.398114   43874 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0708 20:29:29.398123   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.398135   43874 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0708 20:29:29.398139   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398145   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.398157   43874 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0708 20:29:29.398169   43874 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0708 20:29:29.398177   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398187   43874 command_runner.go:130] >       "size": "112194888",
	I0708 20:29:29.398195   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.398203   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.398211   43874 command_runner.go:130] >       },
	I0708 20:29:29.398216   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.398224   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.398231   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.398238   43874 command_runner.go:130] >     },
	I0708 20:29:29.398242   43874 command_runner.go:130] >     {
	I0708 20:29:29.398254   43874 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0708 20:29:29.398263   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.398273   43874 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0708 20:29:29.398280   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398286   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.398322   43874 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0708 20:29:29.398337   43874 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0708 20:29:29.398342   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398348   43874 command_runner.go:130] >       "size": "85953433",
	I0708 20:29:29.398360   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.398366   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.398372   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.398378   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.398383   43874 command_runner.go:130] >     },
	I0708 20:29:29.398387   43874 command_runner.go:130] >     {
	I0708 20:29:29.398396   43874 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0708 20:29:29.398402   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.398409   43874 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0708 20:29:29.398415   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398421   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.398436   43874 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0708 20:29:29.398450   43874 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0708 20:29:29.398458   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398464   43874 command_runner.go:130] >       "size": "63051080",
	I0708 20:29:29.398472   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.398479   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.398487   43874 command_runner.go:130] >       },
	I0708 20:29:29.398496   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.398505   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.398514   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.398521   43874 command_runner.go:130] >     },
	I0708 20:29:29.398524   43874 command_runner.go:130] >     {
	I0708 20:29:29.398534   43874 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0708 20:29:29.398541   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.398545   43874 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0708 20:29:29.398549   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398553   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.398566   43874 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0708 20:29:29.398579   43874 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0708 20:29:29.398587   43874 command_runner.go:130] >       ],
	I0708 20:29:29.398594   43874 command_runner.go:130] >       "size": "750414",
	I0708 20:29:29.398604   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.398611   43874 command_runner.go:130] >         "value": "65535"
	I0708 20:29:29.398620   43874 command_runner.go:130] >       },
	I0708 20:29:29.398627   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.398639   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.398646   43874 command_runner.go:130] >       "pinned": true
	I0708 20:29:29.398649   43874 command_runner.go:130] >     }
	I0708 20:29:29.398655   43874 command_runner.go:130] >   ]
	I0708 20:29:29.398658   43874 command_runner.go:130] > }
	I0708 20:29:29.398876   43874 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:29:29.398900   43874 crio.go:433] Images already preloaded, skipping extraction
	I0708 20:29:29.398951   43874 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:29:29.435103   43874 command_runner.go:130] > {
	I0708 20:29:29.435133   43874 command_runner.go:130] >   "images": [
	I0708 20:29:29.435138   43874 command_runner.go:130] >     {
	I0708 20:29:29.435145   43874 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0708 20:29:29.435150   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435156   43874 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0708 20:29:29.435160   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435164   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435179   43874 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0708 20:29:29.435191   43874 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0708 20:29:29.435197   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435205   43874 command_runner.go:130] >       "size": "65908273",
	I0708 20:29:29.435215   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.435227   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.435241   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.435252   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.435268   43874 command_runner.go:130] >     },
	I0708 20:29:29.435277   43874 command_runner.go:130] >     {
	I0708 20:29:29.435287   43874 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0708 20:29:29.435306   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435320   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0708 20:29:29.435326   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435334   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435346   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0708 20:29:29.435357   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0708 20:29:29.435364   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435369   43874 command_runner.go:130] >       "size": "1363676",
	I0708 20:29:29.435375   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.435384   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.435392   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.435396   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.435403   43874 command_runner.go:130] >     },
	I0708 20:29:29.435407   43874 command_runner.go:130] >     {
	I0708 20:29:29.435415   43874 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0708 20:29:29.435422   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435427   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0708 20:29:29.435434   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435438   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435461   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0708 20:29:29.435479   43874 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0708 20:29:29.435489   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435496   43874 command_runner.go:130] >       "size": "31470524",
	I0708 20:29:29.435505   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.435510   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.435515   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.435519   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.435525   43874 command_runner.go:130] >     },
	I0708 20:29:29.435529   43874 command_runner.go:130] >     {
	I0708 20:29:29.435538   43874 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0708 20:29:29.435562   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435576   43874 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0708 20:29:29.435583   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435594   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435612   43874 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0708 20:29:29.435629   43874 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0708 20:29:29.435636   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435646   43874 command_runner.go:130] >       "size": "61245718",
	I0708 20:29:29.435659   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.435667   43874 command_runner.go:130] >       "username": "nonroot",
	I0708 20:29:29.435671   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.435675   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.435682   43874 command_runner.go:130] >     },
	I0708 20:29:29.435686   43874 command_runner.go:130] >     {
	I0708 20:29:29.435695   43874 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0708 20:29:29.435705   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435717   43874 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0708 20:29:29.435727   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435737   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435751   43874 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0708 20:29:29.435766   43874 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0708 20:29:29.435775   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435786   43874 command_runner.go:130] >       "size": "150779692",
	I0708 20:29:29.435795   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.435802   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.435806   43874 command_runner.go:130] >       },
	I0708 20:29:29.435813   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.435817   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.435830   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.435836   43874 command_runner.go:130] >     },
	I0708 20:29:29.435840   43874 command_runner.go:130] >     {
	I0708 20:29:29.435848   43874 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0708 20:29:29.435855   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435861   43874 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0708 20:29:29.435867   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435872   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435882   43874 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0708 20:29:29.435893   43874 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0708 20:29:29.435899   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435910   43874 command_runner.go:130] >       "size": "117609954",
	I0708 20:29:29.435917   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.435921   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.435927   43874 command_runner.go:130] >       },
	I0708 20:29:29.435931   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.435936   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.435940   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.435946   43874 command_runner.go:130] >     },
	I0708 20:29:29.435950   43874 command_runner.go:130] >     {
	I0708 20:29:29.435958   43874 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0708 20:29:29.435965   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.435971   43874 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0708 20:29:29.435977   43874 command_runner.go:130] >       ],
	I0708 20:29:29.435981   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.435991   43874 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0708 20:29:29.436002   43874 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0708 20:29:29.436008   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436013   43874 command_runner.go:130] >       "size": "112194888",
	I0708 20:29:29.436020   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.436024   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.436031   43874 command_runner.go:130] >       },
	I0708 20:29:29.436035   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.436042   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.436047   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.436053   43874 command_runner.go:130] >     },
	I0708 20:29:29.436057   43874 command_runner.go:130] >     {
	I0708 20:29:29.436063   43874 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0708 20:29:29.436069   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.436076   43874 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0708 20:29:29.436084   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436088   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.436112   43874 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0708 20:29:29.436122   43874 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0708 20:29:29.436128   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436133   43874 command_runner.go:130] >       "size": "85953433",
	I0708 20:29:29.436139   43874 command_runner.go:130] >       "uid": null,
	I0708 20:29:29.436148   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.436156   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.436160   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.436166   43874 command_runner.go:130] >     },
	I0708 20:29:29.436170   43874 command_runner.go:130] >     {
	I0708 20:29:29.436179   43874 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0708 20:29:29.436185   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.436191   43874 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0708 20:29:29.436197   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436201   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.436211   43874 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0708 20:29:29.436220   43874 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0708 20:29:29.436226   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436231   43874 command_runner.go:130] >       "size": "63051080",
	I0708 20:29:29.436234   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.436242   43874 command_runner.go:130] >         "value": "0"
	I0708 20:29:29.436246   43874 command_runner.go:130] >       },
	I0708 20:29:29.436256   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.436263   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.436267   43874 command_runner.go:130] >       "pinned": false
	I0708 20:29:29.436273   43874 command_runner.go:130] >     },
	I0708 20:29:29.436277   43874 command_runner.go:130] >     {
	I0708 20:29:29.436286   43874 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0708 20:29:29.436293   43874 command_runner.go:130] >       "repoTags": [
	I0708 20:29:29.436298   43874 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0708 20:29:29.436304   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436308   43874 command_runner.go:130] >       "repoDigests": [
	I0708 20:29:29.436318   43874 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0708 20:29:29.436325   43874 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0708 20:29:29.436332   43874 command_runner.go:130] >       ],
	I0708 20:29:29.436336   43874 command_runner.go:130] >       "size": "750414",
	I0708 20:29:29.436343   43874 command_runner.go:130] >       "uid": {
	I0708 20:29:29.436348   43874 command_runner.go:130] >         "value": "65535"
	I0708 20:29:29.436354   43874 command_runner.go:130] >       },
	I0708 20:29:29.436358   43874 command_runner.go:130] >       "username": "",
	I0708 20:29:29.436364   43874 command_runner.go:130] >       "spec": null,
	I0708 20:29:29.436375   43874 command_runner.go:130] >       "pinned": true
	I0708 20:29:29.436382   43874 command_runner.go:130] >     }
	I0708 20:29:29.436386   43874 command_runner.go:130] >   ]
	I0708 20:29:29.436392   43874 command_runner.go:130] > }
	I0708 20:29:29.436516   43874 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:29:29.436530   43874 cache_images.go:84] Images are preloaded, skipping loading
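	For context on the check logged above (crio.go:514 / cache_images.go:84): the runner shells out to `sudo crictl images --output json` and inspects the returned image list before deciding to skip preload extraction. The following is a minimal, illustrative Go sketch of that kind of check, not minikube's actual code; the struct fields simply mirror the JSON shape in the log (id, repoTags, repoDigests, size, pinned), and the expected tag list used in main is an assumption for the example.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList mirrors the `crictl images --output json` payload shown in the log.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	// preloaded reports whether every tag in want appears in the crictl output.
	func preloaded(raw []byte, want []string) (bool, error) {
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, tag := range want {
			if !have[tag] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		// Abbreviated sample payload in the same shape as the logged output.
		raw := []byte(`{"images":[{"id":"56ce0fd9fb53","repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"repoDigests":[],"size":"117609954","pinned":false}]}`)
		ok, err := preloaded(raw, []string{"registry.k8s.io/kube-apiserver:v1.30.2"})
		fmt.Println(ok, err) // prints: true <nil>
	}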
	I0708 20:29:29.436537   43874 kubeadm.go:928] updating node { 192.168.39.44 8443 v1.30.2 crio true true} ...
	I0708 20:29:29.436646   43874 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-957088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-957088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
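	The kubelet unit fragment logged just above (kubeadm.go:940) is parameterized by the Kubernetes version, the node name, and the node IP. Below is a minimal sketch of how such an ExecStart line could be assembled; the helper name is hypothetical and this is not kubeadm.go's actual template, only a reconstruction of the flags visible in the log.

	package main

	import "fmt"

	// kubeletExecStart is a hypothetical helper that rebuilds the ExecStart line
	// seen in the log from the Kubernetes version, node name, and node IP.
	func kubeletExecStart(version, nodeName, nodeIP string) string {
		return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet"+
			" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
			" --config=/var/lib/kubelet/config.yaml"+
			" --hostname-override=%s"+
			" --kubeconfig=/etc/kubernetes/kubelet.conf"+
			" --node-ip=%s", version, nodeName, nodeIP)
	}

	func main() {
		// Values taken from the cluster config logged above.
		fmt.Println(kubeletExecStart("v1.30.2", "multinode-957088", "192.168.39.44"))
	}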
	I0708 20:29:29.436721   43874 ssh_runner.go:195] Run: crio config
	I0708 20:29:29.471430   43874 command_runner.go:130] ! time="2024-07-08 20:29:29.440929795Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0708 20:29:29.476995   43874 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0708 20:29:29.489477   43874 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0708 20:29:29.489505   43874 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0708 20:29:29.489516   43874 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0708 20:29:29.489521   43874 command_runner.go:130] > #
	I0708 20:29:29.489531   43874 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0708 20:29:29.489541   43874 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0708 20:29:29.489553   43874 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0708 20:29:29.489566   43874 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0708 20:29:29.489574   43874 command_runner.go:130] > # reload'.
	I0708 20:29:29.489585   43874 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0708 20:29:29.489597   43874 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0708 20:29:29.489610   43874 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0708 20:29:29.489622   43874 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0708 20:29:29.489630   43874 command_runner.go:130] > [crio]
	I0708 20:29:29.489639   43874 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0708 20:29:29.489649   43874 command_runner.go:130] > # containers images, in this directory.
	I0708 20:29:29.489658   43874 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0708 20:29:29.489677   43874 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0708 20:29:29.489687   43874 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0708 20:29:29.489701   43874 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0708 20:29:29.489710   43874 command_runner.go:130] > # imagestore = ""
	I0708 20:29:29.489723   43874 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0708 20:29:29.489735   43874 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0708 20:29:29.489743   43874 command_runner.go:130] > storage_driver = "overlay"
	I0708 20:29:29.489755   43874 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0708 20:29:29.489766   43874 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0708 20:29:29.489775   43874 command_runner.go:130] > storage_option = [
	I0708 20:29:29.489785   43874 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0708 20:29:29.489793   43874 command_runner.go:130] > ]
	I0708 20:29:29.489804   43874 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0708 20:29:29.489816   43874 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0708 20:29:29.489846   43874 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0708 20:29:29.489858   43874 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0708 20:29:29.489868   43874 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0708 20:29:29.489877   43874 command_runner.go:130] > # always happen on a node reboot
	I0708 20:29:29.489884   43874 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0708 20:29:29.489895   43874 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0708 20:29:29.489903   43874 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0708 20:29:29.489908   43874 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0708 20:29:29.489916   43874 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0708 20:29:29.489923   43874 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0708 20:29:29.489933   43874 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0708 20:29:29.489939   43874 command_runner.go:130] > # internal_wipe = true
	I0708 20:29:29.489947   43874 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0708 20:29:29.489954   43874 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0708 20:29:29.489959   43874 command_runner.go:130] > # internal_repair = false
	I0708 20:29:29.489964   43874 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0708 20:29:29.489972   43874 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0708 20:29:29.489980   43874 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0708 20:29:29.489985   43874 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0708 20:29:29.489992   43874 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0708 20:29:29.489996   43874 command_runner.go:130] > [crio.api]
	I0708 20:29:29.490003   43874 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0708 20:29:29.490008   43874 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0708 20:29:29.490015   43874 command_runner.go:130] > # IP address on which the stream server will listen.
	I0708 20:29:29.490019   43874 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0708 20:29:29.490028   43874 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0708 20:29:29.490035   43874 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0708 20:29:29.490039   43874 command_runner.go:130] > # stream_port = "0"
	I0708 20:29:29.490045   43874 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0708 20:29:29.490051   43874 command_runner.go:130] > # stream_enable_tls = false
	I0708 20:29:29.490056   43874 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0708 20:29:29.490063   43874 command_runner.go:130] > # stream_idle_timeout = ""
	I0708 20:29:29.490069   43874 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0708 20:29:29.490075   43874 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0708 20:29:29.490081   43874 command_runner.go:130] > # minutes.
	I0708 20:29:29.490085   43874 command_runner.go:130] > # stream_tls_cert = ""
	I0708 20:29:29.490098   43874 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0708 20:29:29.490107   43874 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0708 20:29:29.490118   43874 command_runner.go:130] > # stream_tls_key = ""
	I0708 20:29:29.490126   43874 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0708 20:29:29.490133   43874 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0708 20:29:29.490154   43874 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0708 20:29:29.490161   43874 command_runner.go:130] > # stream_tls_ca = ""
	I0708 20:29:29.490174   43874 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0708 20:29:29.490181   43874 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0708 20:29:29.490187   43874 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0708 20:29:29.490194   43874 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0708 20:29:29.490200   43874 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0708 20:29:29.490207   43874 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0708 20:29:29.490211   43874 command_runner.go:130] > [crio.runtime]
	I0708 20:29:29.490219   43874 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0708 20:29:29.490226   43874 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0708 20:29:29.490230   43874 command_runner.go:130] > # "nofile=1024:2048"
	I0708 20:29:29.490238   43874 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0708 20:29:29.490244   43874 command_runner.go:130] > # default_ulimits = [
	I0708 20:29:29.490247   43874 command_runner.go:130] > # ]
	I0708 20:29:29.490253   43874 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0708 20:29:29.490259   43874 command_runner.go:130] > # no_pivot = false
	I0708 20:29:29.490264   43874 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0708 20:29:29.490272   43874 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0708 20:29:29.490280   43874 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0708 20:29:29.490285   43874 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0708 20:29:29.490293   43874 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0708 20:29:29.490299   43874 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0708 20:29:29.490305   43874 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0708 20:29:29.490310   43874 command_runner.go:130] > # Cgroup setting for conmon
	I0708 20:29:29.490318   43874 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0708 20:29:29.490323   43874 command_runner.go:130] > conmon_cgroup = "pod"
	I0708 20:29:29.490330   43874 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0708 20:29:29.490337   43874 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0708 20:29:29.490343   43874 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0708 20:29:29.490348   43874 command_runner.go:130] > conmon_env = [
	I0708 20:29:29.490358   43874 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0708 20:29:29.490364   43874 command_runner.go:130] > ]
	I0708 20:29:29.490369   43874 command_runner.go:130] > # Additional environment variables to set for all the
	I0708 20:29:29.490376   43874 command_runner.go:130] > # containers. These are overridden if set in the
	I0708 20:29:29.490381   43874 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0708 20:29:29.490387   43874 command_runner.go:130] > # default_env = [
	I0708 20:29:29.490390   43874 command_runner.go:130] > # ]
	I0708 20:29:29.490398   43874 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0708 20:29:29.490405   43874 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0708 20:29:29.490411   43874 command_runner.go:130] > # selinux = false
	I0708 20:29:29.490417   43874 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0708 20:29:29.490425   43874 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0708 20:29:29.490431   43874 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0708 20:29:29.490437   43874 command_runner.go:130] > # seccomp_profile = ""
	I0708 20:29:29.490442   43874 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0708 20:29:29.490450   43874 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0708 20:29:29.490460   43874 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0708 20:29:29.490466   43874 command_runner.go:130] > # which might increase security.
	I0708 20:29:29.490471   43874 command_runner.go:130] > # This option is currently deprecated,
	I0708 20:29:29.490479   43874 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0708 20:29:29.490486   43874 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0708 20:29:29.490492   43874 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0708 20:29:29.490500   43874 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0708 20:29:29.490508   43874 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0708 20:29:29.490516   43874 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0708 20:29:29.490523   43874 command_runner.go:130] > # This option supports live configuration reload.
	I0708 20:29:29.490528   43874 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0708 20:29:29.490536   43874 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0708 20:29:29.490542   43874 command_runner.go:130] > # the cgroup blockio controller.
	I0708 20:29:29.490546   43874 command_runner.go:130] > # blockio_config_file = ""
	I0708 20:29:29.490555   43874 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0708 20:29:29.490561   43874 command_runner.go:130] > # blockio parameters.
	I0708 20:29:29.490564   43874 command_runner.go:130] > # blockio_reload = false
	I0708 20:29:29.490572   43874 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0708 20:29:29.490578   43874 command_runner.go:130] > # irqbalance daemon.
	I0708 20:29:29.490583   43874 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0708 20:29:29.490595   43874 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0708 20:29:29.490604   43874 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0708 20:29:29.490612   43874 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0708 20:29:29.490620   43874 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0708 20:29:29.490627   43874 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0708 20:29:29.490634   43874 command_runner.go:130] > # This option supports live configuration reload.
	I0708 20:29:29.490638   43874 command_runner.go:130] > # rdt_config_file = ""
	I0708 20:29:29.490644   43874 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0708 20:29:29.490650   43874 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0708 20:29:29.490681   43874 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0708 20:29:29.490687   43874 command_runner.go:130] > # separate_pull_cgroup = ""
	I0708 20:29:29.490694   43874 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0708 20:29:29.490702   43874 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0708 20:29:29.490709   43874 command_runner.go:130] > # will be added.
	I0708 20:29:29.490716   43874 command_runner.go:130] > # default_capabilities = [
	I0708 20:29:29.490724   43874 command_runner.go:130] > # 	"CHOWN",
	I0708 20:29:29.490729   43874 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0708 20:29:29.490738   43874 command_runner.go:130] > # 	"FSETID",
	I0708 20:29:29.490746   43874 command_runner.go:130] > # 	"FOWNER",
	I0708 20:29:29.490755   43874 command_runner.go:130] > # 	"SETGID",
	I0708 20:29:29.490761   43874 command_runner.go:130] > # 	"SETUID",
	I0708 20:29:29.490769   43874 command_runner.go:130] > # 	"SETPCAP",
	I0708 20:29:29.490775   43874 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0708 20:29:29.490798   43874 command_runner.go:130] > # 	"KILL",
	I0708 20:29:29.490805   43874 command_runner.go:130] > # ]
	I0708 20:29:29.490812   43874 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0708 20:29:29.490821   43874 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0708 20:29:29.490827   43874 command_runner.go:130] > # add_inheritable_capabilities = false
	I0708 20:29:29.490834   43874 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0708 20:29:29.490842   43874 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0708 20:29:29.490847   43874 command_runner.go:130] > default_sysctls = [
	I0708 20:29:29.490853   43874 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0708 20:29:29.490859   43874 command_runner.go:130] > ]
	I0708 20:29:29.490865   43874 command_runner.go:130] > # List of devices on the host that a
	I0708 20:29:29.490873   43874 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0708 20:29:29.490880   43874 command_runner.go:130] > # allowed_devices = [
	I0708 20:29:29.490888   43874 command_runner.go:130] > # 	"/dev/fuse",
	I0708 20:29:29.490893   43874 command_runner.go:130] > # ]
	I0708 20:29:29.490898   43874 command_runner.go:130] > # List of additional devices. specified as
	I0708 20:29:29.490905   43874 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0708 20:29:29.490913   43874 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0708 20:29:29.490918   43874 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0708 20:29:29.490924   43874 command_runner.go:130] > # additional_devices = [
	I0708 20:29:29.490928   43874 command_runner.go:130] > # ]
	I0708 20:29:29.490933   43874 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0708 20:29:29.490938   43874 command_runner.go:130] > # cdi_spec_dirs = [
	I0708 20:29:29.490941   43874 command_runner.go:130] > # 	"/etc/cdi",
	I0708 20:29:29.490946   43874 command_runner.go:130] > # 	"/var/run/cdi",
	I0708 20:29:29.490949   43874 command_runner.go:130] > # ]
	I0708 20:29:29.490955   43874 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0708 20:29:29.490963   43874 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0708 20:29:29.490969   43874 command_runner.go:130] > # Defaults to false.
	I0708 20:29:29.490974   43874 command_runner.go:130] > # device_ownership_from_security_context = false
	I0708 20:29:29.490981   43874 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0708 20:29:29.490987   43874 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0708 20:29:29.490992   43874 command_runner.go:130] > # hooks_dir = [
	I0708 20:29:29.490997   43874 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0708 20:29:29.491002   43874 command_runner.go:130] > # ]
	I0708 20:29:29.491008   43874 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0708 20:29:29.491016   43874 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0708 20:29:29.491023   43874 command_runner.go:130] > # its default mounts from the following two files:
	I0708 20:29:29.491027   43874 command_runner.go:130] > #
	I0708 20:29:29.491033   43874 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0708 20:29:29.491040   43874 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0708 20:29:29.491047   43874 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0708 20:29:29.491051   43874 command_runner.go:130] > #
	I0708 20:29:29.491057   43874 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0708 20:29:29.491065   43874 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0708 20:29:29.491072   43874 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0708 20:29:29.491082   43874 command_runner.go:130] > #      only add mounts it finds in this file.
	I0708 20:29:29.491088   43874 command_runner.go:130] > #
	I0708 20:29:29.491092   43874 command_runner.go:130] > # default_mounts_file = ""
	I0708 20:29:29.491104   43874 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0708 20:29:29.491117   43874 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0708 20:29:29.491123   43874 command_runner.go:130] > pids_limit = 1024
	I0708 20:29:29.491130   43874 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0708 20:29:29.491138   43874 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0708 20:29:29.491143   43874 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0708 20:29:29.491153   43874 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0708 20:29:29.491157   43874 command_runner.go:130] > # log_size_max = -1
	I0708 20:29:29.491164   43874 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0708 20:29:29.491170   43874 command_runner.go:130] > # log_to_journald = false
	I0708 20:29:29.491176   43874 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0708 20:29:29.491183   43874 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0708 20:29:29.491188   43874 command_runner.go:130] > # Path to directory for container attach sockets.
	I0708 20:29:29.491195   43874 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0708 20:29:29.491203   43874 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0708 20:29:29.491209   43874 command_runner.go:130] > # bind_mount_prefix = ""
	I0708 20:29:29.491214   43874 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0708 20:29:29.491220   43874 command_runner.go:130] > # read_only = false
	I0708 20:29:29.491226   43874 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0708 20:29:29.491234   43874 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0708 20:29:29.491238   43874 command_runner.go:130] > # live configuration reload.
	I0708 20:29:29.491244   43874 command_runner.go:130] > # log_level = "info"
	I0708 20:29:29.491250   43874 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0708 20:29:29.491257   43874 command_runner.go:130] > # This option supports live configuration reload.
	I0708 20:29:29.491261   43874 command_runner.go:130] > # log_filter = ""
	I0708 20:29:29.491267   43874 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0708 20:29:29.491281   43874 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0708 20:29:29.491287   43874 command_runner.go:130] > # separated by comma.
	I0708 20:29:29.491295   43874 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0708 20:29:29.491301   43874 command_runner.go:130] > # uid_mappings = ""
	I0708 20:29:29.491306   43874 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0708 20:29:29.491314   43874 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0708 20:29:29.491318   43874 command_runner.go:130] > # separated by comma.
	I0708 20:29:29.491326   43874 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0708 20:29:29.491332   43874 command_runner.go:130] > # gid_mappings = ""
	I0708 20:29:29.491337   43874 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0708 20:29:29.491349   43874 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0708 20:29:29.491357   43874 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0708 20:29:29.491367   43874 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0708 20:29:29.491373   43874 command_runner.go:130] > # minimum_mappable_uid = -1
	I0708 20:29:29.491379   43874 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0708 20:29:29.491387   43874 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0708 20:29:29.491395   43874 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0708 20:29:29.491402   43874 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0708 20:29:29.491408   43874 command_runner.go:130] > # minimum_mappable_gid = -1
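For reference, the mapping options above take containerUID:HostUID:Size (or containerGID:HostGID:Size) triples, comma-separated for multiple ranges. A minimal sketch with purely illustrative values, since these deprecated keys are not set in this dump:

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	minimum_mappable_uid = 100000
	minimum_mappable_gid = 100000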
	I0708 20:29:29.491413   43874 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0708 20:29:29.491421   43874 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0708 20:29:29.491427   43874 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0708 20:29:29.491433   43874 command_runner.go:130] > # ctr_stop_timeout = 30
	I0708 20:29:29.491439   43874 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0708 20:29:29.491446   43874 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0708 20:29:29.491470   43874 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0708 20:29:29.491478   43874 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0708 20:29:29.491486   43874 command_runner.go:130] > drop_infra_ctr = false
	I0708 20:29:29.491491   43874 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0708 20:29:29.491499   43874 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0708 20:29:29.491508   43874 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0708 20:29:29.491514   43874 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0708 20:29:29.491521   43874 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0708 20:29:29.491528   43874 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0708 20:29:29.491533   43874 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0708 20:29:29.491540   43874 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0708 20:29:29.491544   43874 command_runner.go:130] > # shared_cpuset = ""
	I0708 20:29:29.491552   43874 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0708 20:29:29.491557   43874 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0708 20:29:29.491563   43874 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0708 20:29:29.491570   43874 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0708 20:29:29.491576   43874 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0708 20:29:29.491581   43874 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0708 20:29:29.491589   43874 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0708 20:29:29.491595   43874 command_runner.go:130] > # enable_criu_support = false
	I0708 20:29:29.491599   43874 command_runner.go:130] > # Enable/disable the generation of the container,
	I0708 20:29:29.491614   43874 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0708 20:29:29.491620   43874 command_runner.go:130] > # enable_pod_events = false
	I0708 20:29:29.491626   43874 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0708 20:29:29.491639   43874 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0708 20:29:29.491646   43874 command_runner.go:130] > # default_runtime = "runc"
	I0708 20:29:29.491650   43874 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0708 20:29:29.491659   43874 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0708 20:29:29.491670   43874 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0708 20:29:29.491677   43874 command_runner.go:130] > # creation as a file is not desired either.
	I0708 20:29:29.491685   43874 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0708 20:29:29.491692   43874 command_runner.go:130] > # the hostname is being managed dynamically.
	I0708 20:29:29.491696   43874 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0708 20:29:29.491701   43874 command_runner.go:130] > # ]
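Spelled out with the /etc/hostname example the comment mentions, this commented-out list would read as follows (illustrative only, not active in this dump):

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]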
	I0708 20:29:29.491711   43874 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0708 20:29:29.491723   43874 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0708 20:29:29.491734   43874 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0708 20:29:29.491744   43874 command_runner.go:130] > # Each entry in the table should follow the format:
	I0708 20:29:29.491752   43874 command_runner.go:130] > #
	I0708 20:29:29.491759   43874 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0708 20:29:29.491769   43874 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0708 20:29:29.491825   43874 command_runner.go:130] > # runtime_type = "oci"
	I0708 20:29:29.491834   43874 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0708 20:29:29.491838   43874 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0708 20:29:29.491842   43874 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0708 20:29:29.491847   43874 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0708 20:29:29.491850   43874 command_runner.go:130] > # monitor_env = []
	I0708 20:29:29.491855   43874 command_runner.go:130] > # privileged_without_host_devices = false
	I0708 20:29:29.491859   43874 command_runner.go:130] > # allowed_annotations = []
	I0708 20:29:29.491865   43874 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0708 20:29:29.491869   43874 command_runner.go:130] > # Where:
	I0708 20:29:29.491877   43874 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0708 20:29:29.491882   43874 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0708 20:29:29.491891   43874 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0708 20:29:29.491899   43874 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0708 20:29:29.491905   43874 command_runner.go:130] > #   in $PATH.
	I0708 20:29:29.491915   43874 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0708 20:29:29.491923   43874 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0708 20:29:29.491931   43874 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0708 20:29:29.491936   43874 command_runner.go:130] > #   state.
	I0708 20:29:29.491942   43874 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0708 20:29:29.491950   43874 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0708 20:29:29.491957   43874 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0708 20:29:29.491964   43874 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0708 20:29:29.491972   43874 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0708 20:29:29.491981   43874 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0708 20:29:29.491987   43874 command_runner.go:130] > #   The currently recognized values are:
	I0708 20:29:29.491994   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0708 20:29:29.492003   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0708 20:29:29.492011   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0708 20:29:29.492017   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0708 20:29:29.492027   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0708 20:29:29.492036   43874 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0708 20:29:29.492042   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0708 20:29:29.492050   43874 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0708 20:29:29.492055   43874 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0708 20:29:29.492063   43874 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0708 20:29:29.492069   43874 command_runner.go:130] > #   deprecated option "conmon".
	I0708 20:29:29.492076   43874 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0708 20:29:29.492083   43874 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0708 20:29:29.492089   43874 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0708 20:29:29.492096   43874 command_runner.go:130] > #   should be moved to the container's cgroup
	I0708 20:29:29.492102   43874 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0708 20:29:29.492109   43874 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0708 20:29:29.492121   43874 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0708 20:29:29.492126   43874 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0708 20:29:29.492131   43874 command_runner.go:130] > #
	I0708 20:29:29.492136   43874 command_runner.go:130] > # Using the seccomp notifier feature:
	I0708 20:29:29.492142   43874 command_runner.go:130] > #
	I0708 20:29:29.492147   43874 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0708 20:29:29.492158   43874 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0708 20:29:29.492163   43874 command_runner.go:130] > #
	I0708 20:29:29.492175   43874 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0708 20:29:29.492183   43874 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0708 20:29:29.492187   43874 command_runner.go:130] > #
	I0708 20:29:29.492193   43874 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0708 20:29:29.492198   43874 command_runner.go:130] > # feature.
	I0708 20:29:29.492202   43874 command_runner.go:130] > #
	I0708 20:29:29.492208   43874 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0708 20:29:29.492216   43874 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0708 20:29:29.492224   43874 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0708 20:29:29.492229   43874 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0708 20:29:29.492237   43874 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0708 20:29:29.492242   43874 command_runner.go:130] > #
	I0708 20:29:29.492247   43874 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0708 20:29:29.492255   43874 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0708 20:29:29.492259   43874 command_runner.go:130] > #
	I0708 20:29:29.492265   43874 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0708 20:29:29.492272   43874 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0708 20:29:29.492275   43874 command_runner.go:130] > #
	I0708 20:29:29.492281   43874 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0708 20:29:29.492288   43874 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0708 20:29:29.492292   43874 command_runner.go:130] > # limitation.
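A handler wired up for the notifier, following the runtime table format documented above, might look like the sketch below; the handler name and runtime_root are placeholders, while the paths mirror the runc entry that follows in this dump:

	[crio.runtime.runtimes.runc-notifier]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc-notifier"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

Pods that select this handler, set io.kubernetes.cri-o.seccompNotifierAction=stop on the sandbox, and use restartPolicy Never would then be terminated after the 5-second timeout described above.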
	I0708 20:29:29.492298   43874 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0708 20:29:29.492305   43874 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0708 20:29:29.492309   43874 command_runner.go:130] > runtime_type = "oci"
	I0708 20:29:29.492314   43874 command_runner.go:130] > runtime_root = "/run/runc"
	I0708 20:29:29.492318   43874 command_runner.go:130] > runtime_config_path = ""
	I0708 20:29:29.492322   43874 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0708 20:29:29.492326   43874 command_runner.go:130] > monitor_cgroup = "pod"
	I0708 20:29:29.492332   43874 command_runner.go:130] > monitor_exec_cgroup = ""
	I0708 20:29:29.492336   43874 command_runner.go:130] > monitor_env = [
	I0708 20:29:29.492344   43874 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0708 20:29:29.492349   43874 command_runner.go:130] > ]
	I0708 20:29:29.492353   43874 command_runner.go:130] > privileged_without_host_devices = false
	I0708 20:29:29.492362   43874 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0708 20:29:29.492367   43874 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0708 20:29:29.492375   43874 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0708 20:29:29.492388   43874 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0708 20:29:29.492398   43874 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0708 20:29:29.492405   43874 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0708 20:29:29.492414   43874 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0708 20:29:29.492423   43874 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0708 20:29:29.492431   43874 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0708 20:29:29.492437   43874 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0708 20:29:29.492442   43874 command_runner.go:130] > # Example:
	I0708 20:29:29.492447   43874 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0708 20:29:29.492453   43874 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0708 20:29:29.492458   43874 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0708 20:29:29.492465   43874 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0708 20:29:29.492468   43874 command_runner.go:130] > # cpuset = 0
	I0708 20:29:29.492474   43874 command_runner.go:130] > # cpushares = "0-1"
	I0708 20:29:29.492478   43874 command_runner.go:130] > # Where:
	I0708 20:29:29.492484   43874 command_runner.go:130] > # The workload name is workload-type.
	I0708 20:29:29.492491   43874 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0708 20:29:29.492499   43874 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0708 20:29:29.492507   43874 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0708 20:29:29.492514   43874 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0708 20:29:29.492521   43874 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0708 20:29:29.492526   43874 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0708 20:29:29.492535   43874 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0708 20:29:29.492545   43874 command_runner.go:130] > # Default value is set to true
	I0708 20:29:29.492552   43874 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0708 20:29:29.492557   43874 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0708 20:29:29.492564   43874 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0708 20:29:29.492568   43874 command_runner.go:130] > # Default value is set to 'false'
	I0708 20:29:29.492574   43874 command_runner.go:130] > # disable_hostport_mapping = false
	I0708 20:29:29.492580   43874 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0708 20:29:29.492584   43874 command_runner.go:130] > #
	I0708 20:29:29.492589   43874 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0708 20:29:29.492595   43874 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0708 20:29:29.492600   43874 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0708 20:29:29.492606   43874 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0708 20:29:29.492611   43874 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0708 20:29:29.492619   43874 command_runner.go:130] > [crio.image]
	I0708 20:29:29.492624   43874 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0708 20:29:29.492628   43874 command_runner.go:130] > # default_transport = "docker://"
	I0708 20:29:29.492634   43874 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0708 20:29:29.492639   43874 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0708 20:29:29.492643   43874 command_runner.go:130] > # global_auth_file = ""
	I0708 20:29:29.492647   43874 command_runner.go:130] > # The image used to instantiate infra containers.
	I0708 20:29:29.492652   43874 command_runner.go:130] > # This option supports live configuration reload.
	I0708 20:29:29.492656   43874 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0708 20:29:29.492661   43874 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0708 20:29:29.492667   43874 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0708 20:29:29.492671   43874 command_runner.go:130] > # This option supports live configuration reload.
	I0708 20:29:29.492675   43874 command_runner.go:130] > # pause_image_auth_file = ""
	I0708 20:29:29.492680   43874 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0708 20:29:29.492686   43874 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0708 20:29:29.492692   43874 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0708 20:29:29.492697   43874 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0708 20:29:29.492700   43874 command_runner.go:130] > # pause_command = "/pause"
	I0708 20:29:29.492708   43874 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0708 20:29:29.492716   43874 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0708 20:29:29.492729   43874 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0708 20:29:29.492740   43874 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0708 20:29:29.492748   43874 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0708 20:29:29.492757   43874 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0708 20:29:29.492763   43874 command_runner.go:130] > # pinned_images = [
	I0708 20:29:29.492767   43874 command_runner.go:130] > # ]
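Combining the exact, glob, and keyword patterns described above, an illustrative pinned list (not active here) could be:

	pinned_images = [
		"registry.k8s.io/pause:3.9",  # exact match
		"registry.k8s.io/kube-*",     # glob: trailing wildcard
		"*coredns*",                  # keyword: wildcards on both ends
	]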
	I0708 20:29:29.492774   43874 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0708 20:29:29.492784   43874 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0708 20:29:29.492796   43874 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0708 20:29:29.492806   43874 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0708 20:29:29.492817   43874 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0708 20:29:29.492826   43874 command_runner.go:130] > # signature_policy = ""
	I0708 20:29:29.492837   43874 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0708 20:29:29.492848   43874 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0708 20:29:29.492858   43874 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0708 20:29:29.492867   43874 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0708 20:29:29.492881   43874 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0708 20:29:29.492889   43874 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0708 20:29:29.492897   43874 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0708 20:29:29.492905   43874 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0708 20:29:29.492911   43874 command_runner.go:130] > # changing them here.
	I0708 20:29:29.492916   43874 command_runner.go:130] > # insecure_registries = [
	I0708 20:29:29.492920   43874 command_runner.go:130] > # ]
	I0708 20:29:29.492926   43874 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0708 20:29:29.492934   43874 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0708 20:29:29.492938   43874 command_runner.go:130] > # image_volumes = "mkdir"
	I0708 20:29:29.492945   43874 command_runner.go:130] > # Temporary directory to use for storing big files
	I0708 20:29:29.492949   43874 command_runner.go:130] > # big_files_temporary_dir = ""
	I0708 20:29:29.492957   43874 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0708 20:29:29.492963   43874 command_runner.go:130] > # CNI plugins.
	I0708 20:29:29.492967   43874 command_runner.go:130] > [crio.network]
	I0708 20:29:29.492974   43874 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0708 20:29:29.492980   43874 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0708 20:29:29.492985   43874 command_runner.go:130] > # cni_default_network = ""
	I0708 20:29:29.492990   43874 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0708 20:29:29.492997   43874 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0708 20:29:29.493002   43874 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0708 20:29:29.493008   43874 command_runner.go:130] > # plugin_dirs = [
	I0708 20:29:29.493012   43874 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0708 20:29:29.493017   43874 command_runner.go:130] > # ]
	I0708 20:29:29.493023   43874 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0708 20:29:29.493029   43874 command_runner.go:130] > [crio.metrics]
	I0708 20:29:29.493038   43874 command_runner.go:130] > # Globally enable or disable metrics support.
	I0708 20:29:29.493044   43874 command_runner.go:130] > enable_metrics = true
	I0708 20:29:29.493049   43874 command_runner.go:130] > # Specify enabled metrics collectors.
	I0708 20:29:29.493055   43874 command_runner.go:130] > # Per default all metrics are enabled.
	I0708 20:29:29.493061   43874 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0708 20:29:29.493069   43874 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0708 20:29:29.493077   43874 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0708 20:29:29.493081   43874 command_runner.go:130] > # metrics_collectors = [
	I0708 20:29:29.493085   43874 command_runner.go:130] > # 	"operations",
	I0708 20:29:29.493092   43874 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0708 20:29:29.493100   43874 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0708 20:29:29.493107   43874 command_runner.go:130] > # 	"operations_errors",
	I0708 20:29:29.493114   43874 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0708 20:29:29.493120   43874 command_runner.go:130] > # 	"image_pulls_by_name",
	I0708 20:29:29.493125   43874 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0708 20:29:29.493131   43874 command_runner.go:130] > # 	"image_pulls_failures",
	I0708 20:29:29.493135   43874 command_runner.go:130] > # 	"image_pulls_successes",
	I0708 20:29:29.493139   43874 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0708 20:29:29.493143   43874 command_runner.go:130] > # 	"image_layer_reuse",
	I0708 20:29:29.493150   43874 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0708 20:29:29.493154   43874 command_runner.go:130] > # 	"containers_oom_total",
	I0708 20:29:29.493159   43874 command_runner.go:130] > # 	"containers_oom",
	I0708 20:29:29.493163   43874 command_runner.go:130] > # 	"processes_defunct",
	I0708 20:29:29.493167   43874 command_runner.go:130] > # 	"operations_total",
	I0708 20:29:29.493173   43874 command_runner.go:130] > # 	"operations_latency_seconds",
	I0708 20:29:29.493177   43874 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0708 20:29:29.493183   43874 command_runner.go:130] > # 	"operations_errors_total",
	I0708 20:29:29.493188   43874 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0708 20:29:29.493195   43874 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0708 20:29:29.493199   43874 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0708 20:29:29.493204   43874 command_runner.go:130] > # 	"image_pulls_success_total",
	I0708 20:29:29.493208   43874 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0708 20:29:29.493213   43874 command_runner.go:130] > # 	"containers_oom_count_total",
	I0708 20:29:29.493219   43874 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0708 20:29:29.493224   43874 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0708 20:29:29.493229   43874 command_runner.go:130] > # ]
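To collect only a subset, the list can be uncommented with just the desired collectors; an illustrative selection, relying on the prefix equivalence noted above (so "operations" also matches "crio_operations"):

	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]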
	I0708 20:29:29.493233   43874 command_runner.go:130] > # The port on which the metrics server will listen.
	I0708 20:29:29.493239   43874 command_runner.go:130] > # metrics_port = 9090
	I0708 20:29:29.493244   43874 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0708 20:29:29.493250   43874 command_runner.go:130] > # metrics_socket = ""
	I0708 20:29:29.493255   43874 command_runner.go:130] > # The certificate for the secure metrics server.
	I0708 20:29:29.493263   43874 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0708 20:29:29.493272   43874 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0708 20:29:29.493278   43874 command_runner.go:130] > # certificate on any modification event.
	I0708 20:29:29.493282   43874 command_runner.go:130] > # metrics_cert = ""
	I0708 20:29:29.493289   43874 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0708 20:29:29.493299   43874 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0708 20:29:29.493311   43874 command_runner.go:130] > # metrics_key = ""
	I0708 20:29:29.493317   43874 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0708 20:29:29.493323   43874 command_runner.go:130] > [crio.tracing]
	I0708 20:29:29.493328   43874 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0708 20:29:29.493334   43874 command_runner.go:130] > # enable_tracing = false
	I0708 20:29:29.493340   43874 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0708 20:29:29.493346   43874 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0708 20:29:29.493353   43874 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0708 20:29:29.493359   43874 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0708 20:29:29.493364   43874 command_runner.go:130] > # CRI-O NRI configuration.
	I0708 20:29:29.493369   43874 command_runner.go:130] > [crio.nri]
	I0708 20:29:29.493374   43874 command_runner.go:130] > # Globally enable or disable NRI.
	I0708 20:29:29.493380   43874 command_runner.go:130] > # enable_nri = false
	I0708 20:29:29.493384   43874 command_runner.go:130] > # NRI socket to listen on.
	I0708 20:29:29.493391   43874 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0708 20:29:29.493395   43874 command_runner.go:130] > # NRI plugin directory to use.
	I0708 20:29:29.493400   43874 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0708 20:29:29.493405   43874 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0708 20:29:29.493411   43874 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0708 20:29:29.493417   43874 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0708 20:29:29.493423   43874 command_runner.go:130] > # nri_disable_connections = false
	I0708 20:29:29.493428   43874 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0708 20:29:29.493435   43874 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0708 20:29:29.493440   43874 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0708 20:29:29.493447   43874 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0708 20:29:29.493453   43874 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0708 20:29:29.493458   43874 command_runner.go:130] > [crio.stats]
	I0708 20:29:29.493464   43874 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0708 20:29:29.493471   43874 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0708 20:29:29.493475   43874 command_runner.go:130] > # stats_collection_period = 0
	I0708 20:29:29.495354   43874 cni.go:84] Creating CNI manager for ""
	I0708 20:29:29.495389   43874 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0708 20:29:29.495401   43874 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:29:29.495427   43874 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.44 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-957088 NodeName:multinode-957088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:29:29.495567   43874 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-957088"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.44"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:29:29.495626   43874 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:29:29.507591   43874 command_runner.go:130] > kubeadm
	I0708 20:29:29.507612   43874 command_runner.go:130] > kubectl
	I0708 20:29:29.507616   43874 command_runner.go:130] > kubelet
	I0708 20:29:29.507635   43874 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:29:29.507690   43874 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:29:29.517519   43874 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0708 20:29:29.535779   43874 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:29:29.553051   43874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0708 20:29:29.569824   43874 ssh_runner.go:195] Run: grep 192.168.39.44	control-plane.minikube.internal$ /etc/hosts
	I0708 20:29:29.573924   43874 command_runner.go:130] > 192.168.39.44	control-plane.minikube.internal
	I0708 20:29:29.574109   43874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:29:29.714266   43874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:29:29.729309   43874 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088 for IP: 192.168.39.44
	I0708 20:29:29.729331   43874 certs.go:194] generating shared ca certs ...
	I0708 20:29:29.729346   43874 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:29:29.729515   43874 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:29:29.729565   43874 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:29:29.729578   43874 certs.go:256] generating profile certs ...
	I0708 20:29:29.729688   43874 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/client.key
	I0708 20:29:29.729762   43874 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/apiserver.key.49267aaa
	I0708 20:29:29.729805   43874 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/proxy-client.key
	I0708 20:29:29.729817   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 20:29:29.729836   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 20:29:29.729852   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 20:29:29.729869   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 20:29:29.729894   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 20:29:29.729938   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 20:29:29.729963   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 20:29:29.729978   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 20:29:29.730042   43874 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:29:29.730079   43874 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:29:29.730092   43874 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:29:29.730127   43874 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:29:29.730154   43874 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:29:29.730188   43874 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:29:29.730243   43874 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:29:29.730280   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> /usr/share/ca-certificates/131412.pem
	I0708 20:29:29.730299   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:29:29.730315   43874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem -> /usr/share/ca-certificates/13141.pem
	I0708 20:29:29.731168   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:29:29.757569   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:29:29.782884   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:29:29.808319   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:29:29.833142   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0708 20:29:29.857192   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:29:29.881095   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:29:29.906977   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/multinode-957088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 20:29:29.932865   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:29:29.959068   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:29:29.983782   43874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:29:30.010206   43874 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:29:30.028193   43874 ssh_runner.go:195] Run: openssl version
	I0708 20:29:30.035067   43874 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0708 20:29:30.035149   43874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:29:30.046393   43874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:29:30.051313   43874 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:29:30.051351   43874 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:29:30.051396   43874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:29:30.057303   43874 command_runner.go:130] > 3ec20f2e
	I0708 20:29:30.057452   43874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:29:30.066972   43874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:29:30.077966   43874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:29:30.082675   43874 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:29:30.082709   43874 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:29:30.082759   43874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:29:30.089011   43874 command_runner.go:130] > b5213941
	I0708 20:29:30.089111   43874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:29:30.098808   43874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:29:30.110523   43874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:29:30.115178   43874 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:29:30.115237   43874 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:29:30.115295   43874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:29:30.121010   43874 command_runner.go:130] > 51391683
	I0708 20:29:30.121202   43874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:29:30.130965   43874 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:29:30.135669   43874 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:29:30.135696   43874 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0708 20:29:30.135704   43874 command_runner.go:130] > Device: 253,1	Inode: 5245461     Links: 1
	I0708 20:29:30.135713   43874 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0708 20:29:30.135724   43874 command_runner.go:130] > Access: 2024-07-08 20:23:20.376674544 +0000
	I0708 20:29:30.135735   43874 command_runner.go:130] > Modify: 2024-07-08 20:23:20.376674544 +0000
	I0708 20:29:30.135744   43874 command_runner.go:130] > Change: 2024-07-08 20:23:20.376674544 +0000
	I0708 20:29:30.135759   43874 command_runner.go:130] >  Birth: 2024-07-08 20:23:20.376674544 +0000
	I0708 20:29:30.135925   43874 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:29:30.141861   43874 command_runner.go:130] > Certificate will not expire
	I0708 20:29:30.142148   43874 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:29:30.148147   43874 command_runner.go:130] > Certificate will not expire
	I0708 20:29:30.148245   43874 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:29:30.154162   43874 command_runner.go:130] > Certificate will not expire
	I0708 20:29:30.154427   43874 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:29:30.160352   43874 command_runner.go:130] > Certificate will not expire
	I0708 20:29:30.160502   43874 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:29:30.166283   43874 command_runner.go:130] > Certificate will not expire
	I0708 20:29:30.166450   43874 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0708 20:29:30.172270   43874 command_runner.go:130] > Certificate will not expire
	I0708 20:29:30.172412   43874 kubeadm.go:391] StartCluster: {Name:multinode-957088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-957088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:29:30.172559   43874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:29:30.172640   43874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:29:30.210259   43874 command_runner.go:130] > baefad39c2fab79c3b4445fbf12c07192459c3aa2a01861878418918377f387c
	I0708 20:29:30.210292   43874 command_runner.go:130] > c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac
	I0708 20:29:30.210300   43874 command_runner.go:130] > eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4
	I0708 20:29:30.210309   43874 command_runner.go:130] > 5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c
	I0708 20:29:30.210317   43874 command_runner.go:130] > 8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375
	I0708 20:29:30.210326   43874 command_runner.go:130] > 7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507
	I0708 20:29:30.210336   43874 command_runner.go:130] > 3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db
	I0708 20:29:30.210345   43874 command_runner.go:130] > bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024
	I0708 20:29:30.210373   43874 cri.go:89] found id: "baefad39c2fab79c3b4445fbf12c07192459c3aa2a01861878418918377f387c"
	I0708 20:29:30.210381   43874 cri.go:89] found id: "c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac"
	I0708 20:29:30.210384   43874 cri.go:89] found id: "eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4"
	I0708 20:29:30.210387   43874 cri.go:89] found id: "5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c"
	I0708 20:29:30.210390   43874 cri.go:89] found id: "8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375"
	I0708 20:29:30.210393   43874 cri.go:89] found id: "7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507"
	I0708 20:29:30.210396   43874 cri.go:89] found id: "3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db"
	I0708 20:29:30.210399   43874 cri.go:89] found id: "bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024"
	I0708 20:29:30.210401   43874 cri.go:89] found id: ""
	I0708 20:29:30.210440   43874 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.865152073Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7fc63db8-a307-4ee3-8e72-a0a94352865b name=/runtime.v1.RuntimeService/Version
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.866777796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=32ec8c6f-3278-409c-a90e-ea9016127e9e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.867210479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720470803867184871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32ec8c6f-3278-409c-a90e-ea9016127e9e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.867711066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=091fa77d-3a68-4bfd-9edf-d4937b8b1858 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.868014575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=091fa77d-3a68-4bfd-9edf-d4937b8b1858 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.868658210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ea54c73e0f3726c901e14075c8f0809e8b173d25d9c91ce9d4ed2ff869e6062,PodSandboxId:6eb67e95826c021b12fa109d69ab787a87dd8a5871d50576c24982eaf6b0b807,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720470610178566973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76806f8a013ba1f2a9c54c275f108e7e849ffecce0b458befb76019314ca14d4,PodSandboxId:3af269b4aabae5c79730c4b4dbbbabdcf48d9f1ebba9c2add8e02e19219818ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720470576688646543,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54174b10cb5183999bad08287b0a89acebbfac005a775ceb383a4c23ce3412ac,PodSandboxId:e8e3fa51b35ad30cc477a592d8f09444768ccb4f87ad54e76a1422a60e8ae36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720470576691111320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546831c23c80e430aaab6e2a857e677f729f9290a275710847b09a7e355390e2,PodSandboxId:d733ea97b0533e3b2e08e9b2a913ee764189aafa0e159f7445c83ec05acb852d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720470576419730602,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-d84244caf4a9,},Annotations:map[string]
string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5ecc0492b2c2f6027a891fdd6f93fdf7ef1cdded7ba8958191fdaeb2796517,PodSandboxId:c9b6d5d65f23ea51f1eb7acf065a1a27a735adfd72daef063db3832f9aa1942f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720470576435358325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.ku
bernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b516f0a686a5925ebc0bd4ea92a8b6383cf03e4469d7478996644bdea1e54bb,PodSandboxId:07a085bb954d4cbb5a5d1f6aab4fc0055cc0e42f8ca06aa7ae168fd6b3ae6f40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720470572669136148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5da15967827256185eb2546419913d851533e4e51e34d1f698de18415004dda,PodSandboxId:0ccb3568f0163fae07ca185ea0b7c8845d5822bff693b7b83af8c810ac2979bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720470572616418019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2951ca64535e2caa6003d7f7a75347625c078667561b7d1e59372f1df3eba911,PodSandboxId:6d36bac90520e3b1e53aaf308dcf46f20a2162e1c17121cd653c18cf4f0b7d6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720470572569974658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ac3f4ee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d1d879b7776b5cfc71dcaee948a028e4a0628fbb3c661104ea24a5e1de9a58,PodSandboxId:18af6c77652eaf852d32c08b1f452ebcb57d868aed733e97287c3c80b91a45a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720470572525342990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3a3b62e86e9a99cb9815651f876e76dc01fece2f3da4a883d24618d81d3df8,PodSandboxId:45daa79761639627232cb3faa9c11617d117aa5dc666dc134c89d04f8b4b77d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720470268406216186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baefad39c2fab79c3b4445fbf12c07192459c3aa2a01861878418918377f387c,PodSandboxId:d198d3e471da431c3023870c9d69519f87234f13cb13c3665bec4f8611ea0f09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720470225282207533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.kubernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac,PodSandboxId:193c64f1ecc6a73d51c1762d70d307d30e2b434826143db013f1d44dddaca78e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720470224861704699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4,PodSandboxId:28bf5d2a49ccf088e781b2e0279eadf5d7b010921a8be7b053994a391c6c2e9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720470223366468421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c,PodSandboxId:d93dd4e73641f5652616875d582d89397e9f6498ab6011daf92d7734aca83bde,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720470223208155438,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-
d84244caf4a9,},Annotations:map[string]string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507,PodSandboxId:5a4433da8c657a6516644819f9fb27a5b949cbd2a194ca36cae94e87a58589bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720470203714068571,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{
io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375,PodSandboxId:0fc745b8ee3be213a585f87aa31799a7a86a5df9b91557bf723514cbac0709ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720470203773386860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db,PodSandboxId:80bae309ed5a22feb2eac1649026ca650831da62c3c1a44d119edb2b7ce40bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720470203669705068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024,PodSandboxId:d02b3fe8a7e16c5369682d53bb8df678bc4f28ed1bb7d846398c856dd394c579,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720470203639895629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: ac3f4ee6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=091fa77d-3a68-4bfd-9edf-d4937b8b1858 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.908811191Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be28a0c2-ffa2-4c72-bd9f-f5df65b7c006 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.909107708Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6eb67e95826c021b12fa109d69ab787a87dd8a5871d50576c24982eaf6b0b807,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-fqkrd,Uid:c920ac4a-fa2f-4e6a-a937-650806f738ad,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1720470609996107726,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-08T20:29:35.842197848Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8e3fa51b35ad30cc477a592d8f09444768ccb4f87ad54e76a1422a60e8ae36c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-v92sb,Uid:26175213-712f-41f8-b39b-ba4691346d29,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1720470576248693189,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-08T20:29:35.842205132Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c9b6d5d65f23ea51f1eb7acf065a1a27a735adfd72daef063db3832f9aa1942f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4a1bfd34-af18-42f9-92c6-e5a902ca9229,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1720470576204432242,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-08T20:29:35.842204095Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3af269b4aabae5c79730c4b4dbbbabdcf48d9f1ebba9c2add8e02e19219818ab,Metadata:&PodSandboxMetadata{Name:kindnet-9t7dr,Uid:26461f24-d94c-4eaa-bfa7-0633c4c556e8,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1720470576202844188,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-08T20:29:35.842199317Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d733ea97b0533e3b2e08e9b2a913ee764189aafa0e159f7445c83ec05acb852d,Metadata:&PodSandboxMetadata{Name:kube-proxy-gfhs4,Uid:804d9347-ea15-4821-819b-d84244caf4a9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1720470576189495243,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-d84244caf4a9,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-08T20:29:35.842201980Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:07a085bb954d4cbb5a5d1f6aab4fc0055cc0e42f8ca06aa7ae168fd6b3ae6f40,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-957088,Uid:3698a636478babda3b4701b1de6df763,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1720470572358833613,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3698a636478babda3b4701b1de6df763,kubernetes.io/config.seen: 2024-07-08T20:29:31.846827444Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:18af6c77652eaf852d32c08b1f452ebcb57d868aed733e97287c3c80b91a45a3,Metadata:&PodSandboxMetadata{Name:etcd-multinode-95708
8,Uid:c7064efee6c16d289f49531b6c5b5476,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1720470572357531972,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.44:2379,kubernetes.io/config.hash: c7064efee6c16d289f49531b6c5b5476,kubernetes.io/config.seen: 2024-07-08T20:29:31.846820885Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6d36bac90520e3b1e53aaf308dcf46f20a2162e1c17121cd653c18cf4f0b7d6b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-957088,Uid:c619af189d17108f8498ce8aa729508e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1720470572355889588,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-a
piserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.44:8443,kubernetes.io/config.hash: c619af189d17108f8498ce8aa729508e,kubernetes.io/config.seen: 2024-07-08T20:29:31.846825117Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ccb3568f0163fae07ca185ea0b7c8845d5822bff693b7b83af8c810ac2979bb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-957088,Uid:4fc7c87d6ce269f042dc0b09281452ab,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1720470572338777449,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,tier: control-plane,},Annotations:map[string]string{kubernete
s.io/config.hash: 4fc7c87d6ce269f042dc0b09281452ab,kubernetes.io/config.seen: 2024-07-08T20:29:31.846826338Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=be28a0c2-ffa2-4c72-bd9f-f5df65b7c006 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.909929969Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c6249f6-bc62-458c-b9b6-f47a64cb6519 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.910017694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c6249f6-bc62-458c-b9b6-f47a64cb6519 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.910238059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ea54c73e0f3726c901e14075c8f0809e8b173d25d9c91ce9d4ed2ff869e6062,PodSandboxId:6eb67e95826c021b12fa109d69ab787a87dd8a5871d50576c24982eaf6b0b807,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720470610178566973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76806f8a013ba1f2a9c54c275f108e7e849ffecce0b458befb76019314ca14d4,PodSandboxId:3af269b4aabae5c79730c4b4dbbbabdcf48d9f1ebba9c2add8e02e19219818ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720470576688646543,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54174b10cb5183999bad08287b0a89acebbfac005a775ceb383a4c23ce3412ac,PodSandboxId:e8e3fa51b35ad30cc477a592d8f09444768ccb4f87ad54e76a1422a60e8ae36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720470576691111320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546831c23c80e430aaab6e2a857e677f729f9290a275710847b09a7e355390e2,PodSandboxId:d733ea97b0533e3b2e08e9b2a913ee764189aafa0e159f7445c83ec05acb852d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720470576419730602,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-d84244caf4a9,},Annotations:map[string]
string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5ecc0492b2c2f6027a891fdd6f93fdf7ef1cdded7ba8958191fdaeb2796517,PodSandboxId:c9b6d5d65f23ea51f1eb7acf065a1a27a735adfd72daef063db3832f9aa1942f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720470576435358325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.ku
bernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b516f0a686a5925ebc0bd4ea92a8b6383cf03e4469d7478996644bdea1e54bb,PodSandboxId:07a085bb954d4cbb5a5d1f6aab4fc0055cc0e42f8ca06aa7ae168fd6b3ae6f40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720470572669136148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5da15967827256185eb2546419913d851533e4e51e34d1f698de18415004dda,PodSandboxId:0ccb3568f0163fae07ca185ea0b7c8845d5822bff693b7b83af8c810ac2979bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720470572616418019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2951ca64535e2caa6003d7f7a75347625c078667561b7d1e59372f1df3eba911,PodSandboxId:6d36bac90520e3b1e53aaf308dcf46f20a2162e1c17121cd653c18cf4f0b7d6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720470572569974658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ac3f4ee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d1d879b7776b5cfc71dcaee948a028e4a0628fbb3c661104ea24a5e1de9a58,PodSandboxId:18af6c77652eaf852d32c08b1f452ebcb57d868aed733e97287c3c80b91a45a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720470572525342990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c6249f6-bc62-458c-b9b6-f47a64cb6519 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.920296157Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c7f0836-4211-472e-a339-b6403e1093a3 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.920392692Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c7f0836-4211-472e-a339-b6403e1093a3 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.922137335Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=194ba031-93b8-4629-88df-e7695980fae3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.922805574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720470803922776151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=194ba031-93b8-4629-88df-e7695980fae3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.923494529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67e8cd23-2c35-45b4-bbd3-309ae2d38f8c name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.923569479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67e8cd23-2c35-45b4-bbd3-309ae2d38f8c name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.923988522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ea54c73e0f3726c901e14075c8f0809e8b173d25d9c91ce9d4ed2ff869e6062,PodSandboxId:6eb67e95826c021b12fa109d69ab787a87dd8a5871d50576c24982eaf6b0b807,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720470610178566973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76806f8a013ba1f2a9c54c275f108e7e849ffecce0b458befb76019314ca14d4,PodSandboxId:3af269b4aabae5c79730c4b4dbbbabdcf48d9f1ebba9c2add8e02e19219818ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720470576688646543,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54174b10cb5183999bad08287b0a89acebbfac005a775ceb383a4c23ce3412ac,PodSandboxId:e8e3fa51b35ad30cc477a592d8f09444768ccb4f87ad54e76a1422a60e8ae36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720470576691111320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546831c23c80e430aaab6e2a857e677f729f9290a275710847b09a7e355390e2,PodSandboxId:d733ea97b0533e3b2e08e9b2a913ee764189aafa0e159f7445c83ec05acb852d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720470576419730602,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-d84244caf4a9,},Annotations:map[string]
string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5ecc0492b2c2f6027a891fdd6f93fdf7ef1cdded7ba8958191fdaeb2796517,PodSandboxId:c9b6d5d65f23ea51f1eb7acf065a1a27a735adfd72daef063db3832f9aa1942f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720470576435358325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.ku
bernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b516f0a686a5925ebc0bd4ea92a8b6383cf03e4469d7478996644bdea1e54bb,PodSandboxId:07a085bb954d4cbb5a5d1f6aab4fc0055cc0e42f8ca06aa7ae168fd6b3ae6f40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720470572669136148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5da15967827256185eb2546419913d851533e4e51e34d1f698de18415004dda,PodSandboxId:0ccb3568f0163fae07ca185ea0b7c8845d5822bff693b7b83af8c810ac2979bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720470572616418019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2951ca64535e2caa6003d7f7a75347625c078667561b7d1e59372f1df3eba911,PodSandboxId:6d36bac90520e3b1e53aaf308dcf46f20a2162e1c17121cd653c18cf4f0b7d6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720470572569974658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ac3f4ee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d1d879b7776b5cfc71dcaee948a028e4a0628fbb3c661104ea24a5e1de9a58,PodSandboxId:18af6c77652eaf852d32c08b1f452ebcb57d868aed733e97287c3c80b91a45a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720470572525342990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3a3b62e86e9a99cb9815651f876e76dc01fece2f3da4a883d24618d81d3df8,PodSandboxId:45daa79761639627232cb3faa9c11617d117aa5dc666dc134c89d04f8b4b77d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720470268406216186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baefad39c2fab79c3b4445fbf12c07192459c3aa2a01861878418918377f387c,PodSandboxId:d198d3e471da431c3023870c9d69519f87234f13cb13c3665bec4f8611ea0f09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720470225282207533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.kubernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac,PodSandboxId:193c64f1ecc6a73d51c1762d70d307d30e2b434826143db013f1d44dddaca78e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720470224861704699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4,PodSandboxId:28bf5d2a49ccf088e781b2e0279eadf5d7b010921a8be7b053994a391c6c2e9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720470223366468421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c,PodSandboxId:d93dd4e73641f5652616875d582d89397e9f6498ab6011daf92d7734aca83bde,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720470223208155438,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-
d84244caf4a9,},Annotations:map[string]string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507,PodSandboxId:5a4433da8c657a6516644819f9fb27a5b949cbd2a194ca36cae94e87a58589bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720470203714068571,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{
io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375,PodSandboxId:0fc745b8ee3be213a585f87aa31799a7a86a5df9b91557bf723514cbac0709ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720470203773386860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db,PodSandboxId:80bae309ed5a22feb2eac1649026ca650831da62c3c1a44d119edb2b7ce40bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720470203669705068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024,PodSandboxId:d02b3fe8a7e16c5369682d53bb8df678bc4f28ed1bb7d846398c856dd394c579,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720470203639895629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: ac3f4ee6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67e8cd23-2c35-45b4-bbd3-309ae2d38f8c name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.971659756Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4393e3c3-14db-432e-9840-fcefb4533862 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.971763336Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4393e3c3-14db-432e-9840-fcefb4533862 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.973175411Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fcf4289-a3e2-48c2-bbd1-e58e75a03412 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.973851911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720470803973812892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fcf4289-a3e2-48c2-bbd1-e58e75a03412 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.974393246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4decfde-84fa-4436-9d48-a817a5bb7a7f name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.974445921Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4decfde-84fa-4436-9d48-a817a5bb7a7f name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:33:23 multinode-957088 crio[2827]: time="2024-07-08 20:33:23.976507023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ea54c73e0f3726c901e14075c8f0809e8b173d25d9c91ce9d4ed2ff869e6062,PodSandboxId:6eb67e95826c021b12fa109d69ab787a87dd8a5871d50576c24982eaf6b0b807,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1720470610178566973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76806f8a013ba1f2a9c54c275f108e7e849ffecce0b458befb76019314ca14d4,PodSandboxId:3af269b4aabae5c79730c4b4dbbbabdcf48d9f1ebba9c2add8e02e19219818ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1720470576688646543,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54174b10cb5183999bad08287b0a89acebbfac005a775ceb383a4c23ce3412ac,PodSandboxId:e8e3fa51b35ad30cc477a592d8f09444768ccb4f87ad54e76a1422a60e8ae36c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720470576691111320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546831c23c80e430aaab6e2a857e677f729f9290a275710847b09a7e355390e2,PodSandboxId:d733ea97b0533e3b2e08e9b2a913ee764189aafa0e159f7445c83ec05acb852d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720470576419730602,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-d84244caf4a9,},Annotations:map[string]
string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5ecc0492b2c2f6027a891fdd6f93fdf7ef1cdded7ba8958191fdaeb2796517,PodSandboxId:c9b6d5d65f23ea51f1eb7acf065a1a27a735adfd72daef063db3832f9aa1942f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720470576435358325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.ku
bernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b516f0a686a5925ebc0bd4ea92a8b6383cf03e4469d7478996644bdea1e54bb,PodSandboxId:07a085bb954d4cbb5a5d1f6aab4fc0055cc0e42f8ca06aa7ae168fd6b3ae6f40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720470572669136148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5da15967827256185eb2546419913d851533e4e51e34d1f698de18415004dda,PodSandboxId:0ccb3568f0163fae07ca185ea0b7c8845d5822bff693b7b83af8c810ac2979bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720470572616418019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2951ca64535e2caa6003d7f7a75347625c078667561b7d1e59372f1df3eba911,PodSandboxId:6d36bac90520e3b1e53aaf308dcf46f20a2162e1c17121cd653c18cf4f0b7d6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720470572569974658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ac3f4ee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d1d879b7776b5cfc71dcaee948a028e4a0628fbb3c661104ea24a5e1de9a58,PodSandboxId:18af6c77652eaf852d32c08b1f452ebcb57d868aed733e97287c3c80b91a45a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720470572525342990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3a3b62e86e9a99cb9815651f876e76dc01fece2f3da4a883d24618d81d3df8,PodSandboxId:45daa79761639627232cb3faa9c11617d117aa5dc666dc134c89d04f8b4b77d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1720470268406216186,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fqkrd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c920ac4a-fa2f-4e6a-a937-650806f738ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7413232a,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baefad39c2fab79c3b4445fbf12c07192459c3aa2a01861878418918377f387c,PodSandboxId:d198d3e471da431c3023870c9d69519f87234f13cb13c3665bec4f8611ea0f09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720470225282207533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a1bfd34-af18-42f9-92c6-e5a902ca9229,},Annotations:map[string]string{io.kubernetes.container.hash: 9baf8f84,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac,PodSandboxId:193c64f1ecc6a73d51c1762d70d307d30e2b434826143db013f1d44dddaca78e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720470224861704699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v92sb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26175213-712f-41f8-b39b-ba4691346d29,},Annotations:map[string]string{io.kubernetes.container.hash: a8f8fef2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4,PodSandboxId:28bf5d2a49ccf088e781b2e0279eadf5d7b010921a8be7b053994a391c6c2e9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1720470223366468421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9t7dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26461f24-d94c-4eaa-bfa7-0633c4c556e8,},Annotations:map[string]string{io.kubernetes.container.hash: b175a433,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c,PodSandboxId:d93dd4e73641f5652616875d582d89397e9f6498ab6011daf92d7734aca83bde,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720470223208155438,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfhs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 804d9347-ea15-4821-819b-
d84244caf4a9,},Annotations:map[string]string{io.kubernetes.container.hash: 74d0ac8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507,PodSandboxId:5a4433da8c657a6516644819f9fb27a5b949cbd2a194ca36cae94e87a58589bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720470203714068571,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7064efee6c16d289f49531b6c5b5476,},Annotations:map[string]string{
io.kubernetes.container.hash: 13d177f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375,PodSandboxId:0fc745b8ee3be213a585f87aa31799a7a86a5df9b91557bf723514cbac0709ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720470203773386860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3698a636478babda3b4701b1de6df763,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db,PodSandboxId:80bae309ed5a22feb2eac1649026ca650831da62c3c1a44d119edb2b7ce40bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720470203669705068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc7c87d6ce269f042dc0b09281452ab,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024,PodSandboxId:d02b3fe8a7e16c5369682d53bb8df678bc4f28ed1bb7d846398c856dd394c579,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720470203639895629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-957088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c619af189d17108f8498ce8aa729508e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: ac3f4ee6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4decfde-84fa-4436-9d48-a817a5bb7a7f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ea54c73e0f37       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   6eb67e95826c0       busybox-fc5497c4f-fqkrd
	54174b10cb518       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   e8e3fa51b35ad       coredns-7db6d8ff4d-v92sb
	76806f8a013ba       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               1                   3af269b4aabae       kindnet-9t7dr
	1e5ecc0492b2c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   c9b6d5d65f23e       storage-provisioner
	546831c23c80e       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      3 minutes ago       Running             kube-proxy                1                   d733ea97b0533       kube-proxy-gfhs4
	5b516f0a686a5       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      3 minutes ago       Running             kube-scheduler            1                   07a085bb954d4       kube-scheduler-multinode-957088
	e5da159678272       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      3 minutes ago       Running             kube-controller-manager   1                   0ccb3568f0163       kube-controller-manager-multinode-957088
	2951ca64535e2       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      3 minutes ago       Running             kube-apiserver            1                   6d36bac90520e       kube-apiserver-multinode-957088
	03d1d879b7776       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   18af6c77652ea       etcd-multinode-957088
	fc3a3b62e86e9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   45daa79761639       busybox-fc5497c4f-fqkrd
	baefad39c2fab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   d198d3e471da4       storage-provisioner
	c830a371893b1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   193c64f1ecc6a       coredns-7db6d8ff4d-v92sb
	eb391894abfdb       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      9 minutes ago       Exited              kindnet-cni               0                   28bf5d2a49ccf       kindnet-9t7dr
	5e5c1809cf82f       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      9 minutes ago       Exited              kube-proxy                0                   d93dd4e73641f       kube-proxy-gfhs4
	8494ebc50dfd8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      10 minutes ago      Exited              kube-scheduler            0                   0fc745b8ee3be       kube-scheduler-multinode-957088
	7316863a44cdb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   5a4433da8c657       etcd-multinode-957088
	3a84ba8bcb826       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      10 minutes ago      Exited              kube-controller-manager   0                   80bae309ed5a2       kube-controller-manager-multinode-957088
	bcae37a9f4a92       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      10 minutes ago      Exited              kube-apiserver            0                   d02b3fe8a7e16       kube-apiserver-multinode-957088
	
	
	==> coredns [54174b10cb5183999bad08287b0a89acebbfac005a775ceb383a4c23ce3412ac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48696 - 57141 "HINFO IN 2699131153796909940.5949095140639304341. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010806425s
	
	
	==> coredns [c830a371893b1cf684be6fcbc77e7cd88e1b03a99117365b8fda67bfa0ab83ac] <==
	[INFO] 10.244.1.2:39051 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001987224s
	[INFO] 10.244.1.2:34623 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123358s
	[INFO] 10.244.1.2:39567 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091793s
	[INFO] 10.244.1.2:55230 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001520414s
	[INFO] 10.244.1.2:38977 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164138s
	[INFO] 10.244.1.2:53511 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135828s
	[INFO] 10.244.1.2:41184 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112672s
	[INFO] 10.244.0.3:36500 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101415s
	[INFO] 10.244.0.3:46921 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087103s
	[INFO] 10.244.0.3:34413 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110282s
	[INFO] 10.244.0.3:59170 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056095s
	[INFO] 10.244.1.2:48146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135599s
	[INFO] 10.244.1.2:54218 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087076s
	[INFO] 10.244.1.2:43963 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097581s
	[INFO] 10.244.1.2:60755 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069718s
	[INFO] 10.244.0.3:52977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122664s
	[INFO] 10.244.0.3:38629 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000104144s
	[INFO] 10.244.0.3:43014 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182754s
	[INFO] 10.244.0.3:57813 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061916s
	[INFO] 10.244.1.2:34355 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246777s
	[INFO] 10.244.1.2:47330 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117892s
	[INFO] 10.244.1.2:52551 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000160024s
	[INFO] 10.244.1.2:60704 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093704s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-957088
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-957088
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=multinode-957088
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T20_23_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 20:23:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-957088
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:33:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 20:29:35 +0000   Mon, 08 Jul 2024 20:23:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 20:29:35 +0000   Mon, 08 Jul 2024 20:23:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 20:29:35 +0000   Mon, 08 Jul 2024 20:23:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 20:29:35 +0000   Mon, 08 Jul 2024 20:23:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    multinode-957088
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58385afd92734749810a984c4698432d
	  System UUID:                58385afd-9273-4749-810a-984c4698432d
	  Boot ID:                    423b33e5-abaf-4580-b287-154ffa19f04b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fqkrd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 coredns-7db6d8ff4d-v92sb                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m42s
	  kube-system                 etcd-multinode-957088                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m55s
	  kube-system                 kindnet-9t7dr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m42s
	  kube-system                 kube-apiserver-multinode-957088             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-controller-manager-multinode-957088    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-proxy-gfhs4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kube-system                 kube-scheduler-multinode-957088             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m40s                  kube-proxy       
	  Normal  Starting                 3m47s                  kube-proxy       
	  Normal  NodeHasSufficientPID     9m56s                  kubelet          Node multinode-957088 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m56s                  kubelet          Node multinode-957088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m56s                  kubelet          Node multinode-957088 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m56s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m43s                  node-controller  Node multinode-957088 event: Registered Node multinode-957088 in Controller
	  Normal  NodeReady                9m40s                  kubelet          Node multinode-957088 status is now: NodeReady
	  Normal  Starting                 3m53s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m53s)  kubelet          Node multinode-957088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m53s)  kubelet          Node multinode-957088 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x7 over 3m53s)  kubelet          Node multinode-957088 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m36s                  node-controller  Node multinode-957088 event: Registered Node multinode-957088 in Controller
	
	
	Name:               multinode-957088-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-957088-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=multinode-957088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_08T20_30_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 20:30:16 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-957088-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:30:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 08 Jul 2024 20:30:47 +0000   Mon, 08 Jul 2024 20:31:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 08 Jul 2024 20:30:47 +0000   Mon, 08 Jul 2024 20:31:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 08 Jul 2024 20:30:47 +0000   Mon, 08 Jul 2024 20:31:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 08 Jul 2024 20:30:47 +0000   Mon, 08 Jul 2024 20:31:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    multinode-957088-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 88b2e56c785e4d59b587b1a78b1fe471
	  System UUID:                88b2e56c-785e-4d59-b587-b1a78b1fe471
	  Boot ID:                    dd117f1f-1167-4125-9576-23734e9aaf73
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jmmbp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kindnet-hlbwx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m8s
	  kube-system                 kube-proxy-pwshr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  Starting                 9m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  9m8s (x2 over 9m8s)  kubelet          Node multinode-957088-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m8s (x2 over 9m8s)  kubelet          Node multinode-957088-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m8s (x2 over 9m8s)  kubelet          Node multinode-957088-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m                   kubelet          Node multinode-957088-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m8s)  kubelet          Node multinode-957088-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m8s)  kubelet          Node multinode-957088-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m8s)  kubelet          Node multinode-957088-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m                   kubelet          Node multinode-957088-m02 status is now: NodeReady
	  Normal  NodeNotReady             106s                 node-controller  Node multinode-957088-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.056614] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.170082] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.146118] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.303307] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.316196] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.059303] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.553927] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.445612] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.617609] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.079379] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.419328] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.192172] systemd-fstab-generator[1468]: Ignoring "noauto" option for root device
	[Jul 8 20:24] kauditd_printk_skb: 84 callbacks suppressed
	[Jul 8 20:29] systemd-fstab-generator[2741]: Ignoring "noauto" option for root device
	[  +0.145148] systemd-fstab-generator[2753]: Ignoring "noauto" option for root device
	[  +0.174334] systemd-fstab-generator[2768]: Ignoring "noauto" option for root device
	[  +0.137407] systemd-fstab-generator[2781]: Ignoring "noauto" option for root device
	[  +0.295897] systemd-fstab-generator[2809]: Ignoring "noauto" option for root device
	[  +4.283876] systemd-fstab-generator[2910]: Ignoring "noauto" option for root device
	[  +0.088036] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.933379] systemd-fstab-generator[3035]: Ignoring "noauto" option for root device
	[  +4.679658] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.348404] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.515421] systemd-fstab-generator[3859]: Ignoring "noauto" option for root device
	[Jul 8 20:30] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [03d1d879b7776b5cfc71dcaee948a028e4a0628fbb3c661104ea24a5e1de9a58] <==
	{"level":"info","ts":"2024-07-08T20:29:32.997829Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T20:29:32.997857Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T20:29:33.008488Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T20:29:33.009199Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"efcba07991c99763","initial-advertise-peer-urls":["https://192.168.39.44:2380"],"listen-peer-urls":["https://192.168.39.44:2380"],"advertise-client-urls":["https://192.168.39.44:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.44:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T20:29:33.011861Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T20:29:33.011935Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.44:2380"}
	{"level":"info","ts":"2024-07-08T20:29:33.015719Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.44:2380"}
	{"level":"info","ts":"2024-07-08T20:29:34.138044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"efcba07991c99763 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-08T20:29:34.138162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"efcba07991c99763 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-08T20:29:34.138233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"efcba07991c99763 received MsgPreVoteResp from efcba07991c99763 at term 2"}
	{"level":"info","ts":"2024-07-08T20:29:34.138271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"efcba07991c99763 became candidate at term 3"}
	{"level":"info","ts":"2024-07-08T20:29:34.138295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"efcba07991c99763 received MsgVoteResp from efcba07991c99763 at term 3"}
	{"level":"info","ts":"2024-07-08T20:29:34.138323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"efcba07991c99763 became leader at term 3"}
	{"level":"info","ts":"2024-07-08T20:29:34.138365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: efcba07991c99763 elected leader efcba07991c99763 at term 3"}
	{"level":"info","ts":"2024-07-08T20:29:34.144107Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"efcba07991c99763","local-member-attributes":"{Name:multinode-957088 ClientURLs:[https://192.168.39.44:2379]}","request-path":"/0/members/efcba07991c99763/attributes","cluster-id":"aad7d4b1c0e48cd8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T20:29:34.144412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T20:29:34.14447Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T20:29:34.14452Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T20:29:34.144639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T20:29:34.146768Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T20:29:34.146836Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.44:2379"}
	{"level":"info","ts":"2024-07-08T20:30:53.99407Z","caller":"traceutil/trace.go:171","msg":"trace[633866623] linearizableReadLoop","detail":"{readStateIndex:1190; appliedIndex:1189; }","duration":"129.22436ms","start":"2024-07-08T20:30:53.864812Z","end":"2024-07-08T20:30:53.994037Z","steps":["trace[633866623] 'read index received'  (duration: 128.236067ms)","trace[633866623] 'applied index is now lower than readState.Index'  (duration: 987.473µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T20:30:53.994425Z","caller":"traceutil/trace.go:171","msg":"trace[1613955756] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"168.075541ms","start":"2024-07-08T20:30:53.826333Z","end":"2024-07-08T20:30:53.994409Z","steps":["trace[1613955756] 'process raft request'  (duration: 166.806617ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T20:30:53.994718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.836295ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:3 size:13043"}
	{"level":"info","ts":"2024-07-08T20:30:53.995353Z","caller":"traceutil/trace.go:171","msg":"trace[290880143] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:3; response_revision:1087; }","duration":"130.531803ms","start":"2024-07-08T20:30:53.864807Z","end":"2024-07-08T20:30:53.995339Z","steps":["trace[290880143] 'agreement among raft nodes before linearized reading'  (duration: 129.65892ms)"],"step_count":1}
	
	
	==> etcd [7316863a44cdb8996e1c0bd3e57ecdaaf498dd11847872e58d38f31d98da9507] <==
	{"level":"info","ts":"2024-07-08T20:23:25.04599Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T20:23:25.049291Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T20:23:25.05328Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.44:2379"}
	{"level":"info","ts":"2024-07-08T20:24:16.750874Z","caller":"traceutil/trace.go:171","msg":"trace[17002875] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"110.917883ms","start":"2024-07-08T20:24:16.639926Z","end":"2024-07-08T20:24:16.750843Z","steps":["trace[17002875] 'process raft request'  (duration: 98.954758ms)","trace[17002875] 'compare'  (duration: 11.531297ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T20:24:16.750931Z","caller":"traceutil/trace.go:171","msg":"trace[2089420998] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"107.977547ms","start":"2024-07-08T20:24:16.642943Z","end":"2024-07-08T20:24:16.75092Z","steps":["trace[2089420998] 'process raft request'  (duration: 107.591659ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T20:25:02.626002Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.136435ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10908721687817023209 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-957088-m03.17e0569fe29a73be\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-957088-m03.17e0569fe29a73be\" value_size:646 lease:1685349650962247399 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-08T20:25:02.626415Z","caller":"traceutil/trace.go:171","msg":"trace[2045240650] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"239.454997ms","start":"2024-07-08T20:25:02.386936Z","end":"2024-07-08T20:25:02.626391Z","steps":["trace[2045240650] 'process raft request'  (duration: 83.177203ms)","trace[2045240650] 'compare'  (duration: 154.876049ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T20:25:02.62658Z","caller":"traceutil/trace.go:171","msg":"trace[2032193689] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"181.373929ms","start":"2024-07-08T20:25:02.445194Z","end":"2024-07-08T20:25:02.626568Z","steps":["trace[2032193689] 'process raft request'  (duration: 181.036684ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T20:25:04.636422Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.927755ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10908721687817023262 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-9qh7b\" mod_revision:578 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-9qh7b\" value_size:4591 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-9qh7b\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-08T20:25:04.636529Z","caller":"traceutil/trace.go:171","msg":"trace[2027595840] linearizableReadLoop","detail":"{readStateIndex:626; appliedIndex:625; }","duration":"182.570685ms","start":"2024-07-08T20:25:04.453945Z","end":"2024-07-08T20:25:04.636515Z","steps":["trace[2027595840] 'read index received'  (duration: 52.411947ms)","trace[2027595840] 'applied index is now lower than readState.Index'  (duration: 130.157644ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T20:25:04.636819Z","caller":"traceutil/trace.go:171","msg":"trace[1842556120] transaction","detail":"{read_only:false; response_revision:595; number_of_response:1; }","duration":"217.490941ms","start":"2024-07-08T20:25:04.419314Z","end":"2024-07-08T20:25:04.636805Z","steps":["trace[1842556120] 'process raft request'  (duration: 86.963846ms)","trace[1842556120] 'compare'  (duration: 129.816738ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-08T20:25:04.637003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.047522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2024-07-08T20:25:04.637049Z","caller":"traceutil/trace.go:171","msg":"trace[631158454] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:595; }","duration":"183.169661ms","start":"2024-07-08T20:25:04.453871Z","end":"2024-07-08T20:25:04.637041Z","steps":["trace[631158454] 'agreement among raft nodes before linearized reading'  (duration: 183.09019ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T20:25:04.637225Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.212544ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-07-08T20:25:04.637271Z","caller":"traceutil/trace.go:171","msg":"trace[1083375330] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:595; }","duration":"183.282905ms","start":"2024-07-08T20:25:04.453975Z","end":"2024-07-08T20:25:04.637258Z","steps":["trace[1083375330] 'agreement among raft nodes before linearized reading'  (duration: 183.223004ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T20:27:53.332763Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-08T20:27:53.33288Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-957088","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.44:2380"],"advertise-client-urls":["https://192.168.39.44:2379"]}
	{"level":"warn","ts":"2024-07-08T20:27:53.332985Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T20:27:53.333084Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T20:27:53.377468Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.44:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T20:27:53.377557Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.44:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-08T20:27:53.377716Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"efcba07991c99763","current-leader-member-id":"efcba07991c99763"}
	{"level":"info","ts":"2024-07-08T20:27:53.381534Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.44:2380"}
	{"level":"info","ts":"2024-07-08T20:27:53.381759Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.44:2380"}
	{"level":"info","ts":"2024-07-08T20:27:53.381796Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-957088","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.44:2380"],"advertise-client-urls":["https://192.168.39.44:2379"]}
	
	
	==> kernel <==
	 20:33:24 up 10 min,  0 users,  load average: 0.29, 0.28, 0.16
	Linux multinode-957088 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [76806f8a013ba1f2a9c54c275f108e7e849ffecce0b458befb76019314ca14d4] <==
	I0708 20:32:17.767769       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:32:27.780959       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:32:27.780995       1 main.go:227] handling current node
	I0708 20:32:27.781019       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:32:27.781024       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:32:37.785654       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:32:37.785755       1 main.go:227] handling current node
	I0708 20:32:37.785782       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:32:37.785799       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:32:47.799280       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:32:47.799322       1 main.go:227] handling current node
	I0708 20:32:47.799333       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:32:47.799338       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:32:57.805703       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:32:57.805823       1 main.go:227] handling current node
	I0708 20:32:57.805848       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:32:57.805976       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:33:07.822384       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:33:07.822768       1 main.go:227] handling current node
	I0708 20:33:07.822866       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:33:07.822953       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:33:17.831672       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:33:17.831968       1 main.go:227] handling current node
	I0708 20:33:17.832073       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:33:17.832121       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [eb391894abfdb5c57a07aca93940cccdebc13c53818cd4f876536d009f4c14f4] <==
	I0708 20:27:04.378582       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	I0708 20:27:14.383717       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:27:14.383778       1 main.go:227] handling current node
	I0708 20:27:14.383800       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:27:14.383805       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:27:14.383931       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0708 20:27:14.383952       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	I0708 20:27:24.392851       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:27:24.392887       1 main.go:227] handling current node
	I0708 20:27:24.392898       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:27:24.392903       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:27:24.393002       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0708 20:27:24.393023       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	I0708 20:27:34.398063       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:27:34.398109       1 main.go:227] handling current node
	I0708 20:27:34.398120       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:27:34.398125       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:27:34.398232       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0708 20:27:34.398237       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	I0708 20:27:44.479879       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0708 20:27:44.479935       1 main.go:227] handling current node
	I0708 20:27:44.479951       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0708 20:27:44.479956       1 main.go:250] Node multinode-957088-m02 has CIDR [10.244.1.0/24] 
	I0708 20:27:44.480110       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0708 20:27:44.480135       1 main.go:250] Node multinode-957088-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2951ca64535e2caa6003d7f7a75347625c078667561b7d1e59372f1df3eba911] <==
	I0708 20:29:35.472556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0708 20:29:35.474021       1 aggregator.go:165] initial CRD sync complete...
	I0708 20:29:35.474062       1 autoregister_controller.go:141] Starting autoregister controller
	I0708 20:29:35.474070       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0708 20:29:35.510135       1 shared_informer.go:320] Caches are synced for configmaps
	I0708 20:29:35.510224       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0708 20:29:35.519226       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0708 20:29:35.519269       1 policy_source.go:224] refreshing policies
	E0708 20:29:35.543461       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0708 20:29:35.573187       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 20:29:35.575472       1 cache.go:39] Caches are synced for autoregister controller
	I0708 20:29:35.608811       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0708 20:29:35.611085       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0708 20:29:35.611320       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0708 20:29:35.611451       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0708 20:29:35.612577       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0708 20:29:35.617683       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0708 20:29:36.419165       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0708 20:29:37.855818       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 20:29:37.974175       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 20:29:37.987901       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 20:29:38.067400       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 20:29:38.074903       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0708 20:29:48.566228       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 20:29:48.622849       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [bcae37a9f4a928982ec835a7508d8e28b3c0ca53038cb7153b171890b806e024] <==
	E0708 20:23:28.769907       1 timeout.go:142] post-timeout activity - time-elapsed: 2.893663ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0708 20:23:28.998144       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 20:23:29.044056       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0708 20:23:29.060886       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 20:23:42.424405       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0708 20:23:42.503083       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0708 20:24:29.655101       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35086: use of closed network connection
	E0708 20:24:29.833129       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35102: use of closed network connection
	E0708 20:24:30.019120       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35116: use of closed network connection
	E0708 20:24:30.209336       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35128: use of closed network connection
	E0708 20:24:30.380777       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35158: use of closed network connection
	E0708 20:24:30.545966       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35174: use of closed network connection
	E0708 20:24:30.826903       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35214: use of closed network connection
	E0708 20:24:31.024241       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35232: use of closed network connection
	E0708 20:24:31.203948       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35256: use of closed network connection
	E0708 20:24:31.377520       1 conn.go:339] Error on socket receive: read tcp 192.168.39.44:8443->192.168.39.1:35280: use of closed network connection
	I0708 20:27:53.323755       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0708 20:27:53.341228       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.341349       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.341507       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.341915       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.341995       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.342087       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.342148       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0708 20:27:53.342575       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [3a84ba8bcb82697692a00135c5f81975047f802b58e72fccfc320d8f2f8fe2db] <==
	I0708 20:24:16.827802       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-957088-m02"
	I0708 20:24:16.847904       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-957088-m02" podCIDRs=["10.244.1.0/24"]
	I0708 20:24:24.468241       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:24:26.957178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.447029ms"
	I0708 20:24:26.973276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.029062ms"
	I0708 20:24:26.974445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.491µs"
	I0708 20:24:26.984402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.259µs"
	I0708 20:24:26.988419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="134.314µs"
	I0708 20:24:28.649651       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.802441ms"
	I0708 20:24:28.649981       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.721µs"
	I0708 20:24:29.213176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.364265ms"
	I0708 20:24:29.213419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.471µs"
	I0708 20:25:02.630099       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:25:02.633439       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-957088-m03\" does not exist"
	I0708 20:25:02.668817       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-957088-m03" podCIDRs=["10.244.2.0/24"]
	I0708 20:25:06.853126       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-957088-m03"
	I0708 20:25:10.704305       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:25:39.397372       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:25:40.554526       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-957088-m03\" does not exist"
	I0708 20:25:40.555227       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:25:40.570766       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-957088-m03" podCIDRs=["10.244.3.0/24"]
	I0708 20:25:47.733320       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:26:31.905493       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m03"
	I0708 20:26:31.966992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.015186ms"
	I0708 20:26:31.967459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.187µs"
	
	
	==> kube-controller-manager [e5da15967827256185eb2546419913d851533e4e51e34d1f698de18415004dda] <==
	I0708 20:30:16.874000       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-957088-m02" podCIDRs=["10.244.1.0/24"]
	I0708 20:30:18.691219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.171µs"
	I0708 20:30:18.752145       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.624µs"
	I0708 20:30:18.798402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.413µs"
	I0708 20:30:18.808384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.619µs"
	I0708 20:30:18.815490       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.239µs"
	I0708 20:30:18.822380       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.771µs"
	I0708 20:30:18.827086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="200.613µs"
	I0708 20:30:24.039850       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:30:24.057285       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.326µs"
	I0708 20:30:24.070438       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.91µs"
	I0708 20:30:26.311565       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.709611ms"
	I0708 20:30:26.312047       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.11µs"
	I0708 20:30:42.429570       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:30:43.532529       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:30:43.532740       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-957088-m03\" does not exist"
	I0708 20:30:43.543461       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-957088-m03" podCIDRs=["10.244.2.0/24"]
	I0708 20:30:57.167430       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:31:02.705366       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-957088-m02"
	I0708 20:31:38.706206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.716873ms"
	I0708 20:31:38.708487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.555µs"
	I0708 20:31:48.515115       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-znnpz"
	I0708 20:31:48.545784       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-znnpz"
	I0708 20:31:48.545825       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-9qh7b"
	I0708 20:31:48.570378       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-9qh7b"
	
	
	==> kube-proxy [546831c23c80e430aaab6e2a857e677f729f9290a275710847b09a7e355390e2] <==
	I0708 20:29:36.800647       1 server_linux.go:69] "Using iptables proxy"
	I0708 20:29:36.831690       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.44"]
	I0708 20:29:36.909325       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 20:29:36.909378       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 20:29:36.909395       1 server_linux.go:165] "Using iptables Proxier"
	I0708 20:29:36.923227       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 20:29:36.923494       1 server.go:872] "Version info" version="v1.30.2"
	I0708 20:29:36.923522       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:29:36.925182       1 config.go:192] "Starting service config controller"
	I0708 20:29:36.925228       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 20:29:36.925255       1 config.go:101] "Starting endpoint slice config controller"
	I0708 20:29:36.925277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 20:29:36.925920       1 config.go:319] "Starting node config controller"
	I0708 20:29:36.925946       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 20:29:37.026169       1 shared_informer.go:320] Caches are synced for node config
	I0708 20:29:37.026252       1 shared_informer.go:320] Caches are synced for service config
	I0708 20:29:37.026290       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [5e5c1809cf82f453326374a8a1e7e69841af367b7ba2b9ff453f24433ddd384c] <==
	I0708 20:23:43.492668       1 server_linux.go:69] "Using iptables proxy"
	I0708 20:23:43.515278       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.44"]
	I0708 20:23:43.567583       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 20:23:43.567753       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 20:23:43.567784       1 server_linux.go:165] "Using iptables Proxier"
	I0708 20:23:43.570488       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 20:23:43.570755       1 server.go:872] "Version info" version="v1.30.2"
	I0708 20:23:43.570928       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:23:43.572241       1 config.go:192] "Starting service config controller"
	I0708 20:23:43.572289       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 20:23:43.572327       1 config.go:101] "Starting endpoint slice config controller"
	I0708 20:23:43.572343       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 20:23:43.573210       1 config.go:319] "Starting node config controller"
	I0708 20:23:43.573270       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 20:23:43.673102       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 20:23:43.673143       1 shared_informer.go:320] Caches are synced for service config
	I0708 20:23:43.673361       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5b516f0a686a5925ebc0bd4ea92a8b6383cf03e4469d7478996644bdea1e54bb] <==
	I0708 20:29:33.578160       1 serving.go:380] Generated self-signed cert in-memory
	W0708 20:29:35.471574       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 20:29:35.471675       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 20:29:35.471745       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 20:29:35.471770       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 20:29:35.503159       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0708 20:29:35.503986       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:29:35.510645       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0708 20:29:35.510820       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0708 20:29:35.510858       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 20:29:35.510891       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0708 20:29:35.533850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 20:29:35.552667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 20:29:35.535228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 20:29:35.552739       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0708 20:29:35.535371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 20:29:35.552755       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0708 20:29:35.614143       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8494ebc50dfd809995f525d1ea366c3d7afea7ae5890048246b57870d5bf3375] <==
	E0708 20:23:26.468733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 20:23:26.468044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 20:23:26.468751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 20:23:26.468086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 20:23:26.468764       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 20:23:27.275883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 20:23:27.275933       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0708 20:23:27.381926       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 20:23:27.381987       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 20:23:27.391307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 20:23:27.391424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 20:23:27.583696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 20:23:27.583804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 20:23:27.647641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 20:23:27.647785       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 20:23:27.652294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 20:23:27.652415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 20:23:27.693084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 20:23:27.693229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 20:23:27.706240       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0708 20:23:27.706339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0708 20:23:27.727525       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 20:23:27.727708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0708 20:23:30.258112       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0708 20:27:53.346370       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.843722    3042 topology_manager.go:215] "Topology Admit Handler" podUID="26175213-712f-41f8-b39b-ba4691346d29" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v92sb"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.843972    3042 topology_manager.go:215] "Topology Admit Handler" podUID="4a1bfd34-af18-42f9-92c6-e5a902ca9229" podNamespace="kube-system" podName="storage-provisioner"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.844077    3042 topology_manager.go:215] "Topology Admit Handler" podUID="c920ac4a-fa2f-4e6a-a937-650806f738ad" podNamespace="default" podName="busybox-fc5497c4f-fqkrd"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.859054    3042 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.871552    3042 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/26461f24-d94c-4eaa-bfa7-0633c4c556e8-cni-cfg\") pod \"kindnet-9t7dr\" (UID: \"26461f24-d94c-4eaa-bfa7-0633c4c556e8\") " pod="kube-system/kindnet-9t7dr"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.871782    3042 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26461f24-d94c-4eaa-bfa7-0633c4c556e8-xtables-lock\") pod \"kindnet-9t7dr\" (UID: \"26461f24-d94c-4eaa-bfa7-0633c4c556e8\") " pod="kube-system/kindnet-9t7dr"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.872010    3042 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26461f24-d94c-4eaa-bfa7-0633c4c556e8-lib-modules\") pod \"kindnet-9t7dr\" (UID: \"26461f24-d94c-4eaa-bfa7-0633c4c556e8\") " pod="kube-system/kindnet-9t7dr"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.872109    3042 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/804d9347-ea15-4821-819b-d84244caf4a9-xtables-lock\") pod \"kube-proxy-gfhs4\" (UID: \"804d9347-ea15-4821-819b-d84244caf4a9\") " pod="kube-system/kube-proxy-gfhs4"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.872286    3042 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/804d9347-ea15-4821-819b-d84244caf4a9-lib-modules\") pod \"kube-proxy-gfhs4\" (UID: \"804d9347-ea15-4821-819b-d84244caf4a9\") " pod="kube-system/kube-proxy-gfhs4"
	Jul 08 20:29:35 multinode-957088 kubelet[3042]: I0708 20:29:35.872383    3042 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4a1bfd34-af18-42f9-92c6-e5a902ca9229-tmp\") pod \"storage-provisioner\" (UID: \"4a1bfd34-af18-42f9-92c6-e5a902ca9229\") " pod="kube-system/storage-provisioner"
	Jul 08 20:30:31 multinode-957088 kubelet[3042]: E0708 20:30:31.925789    3042 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:30:31 multinode-957088 kubelet[3042]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:30:31 multinode-957088 kubelet[3042]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:30:31 multinode-957088 kubelet[3042]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:30:31 multinode-957088 kubelet[3042]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 20:31:31 multinode-957088 kubelet[3042]: E0708 20:31:31.925567    3042 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:31:31 multinode-957088 kubelet[3042]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:31:31 multinode-957088 kubelet[3042]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:31:31 multinode-957088 kubelet[3042]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:31:31 multinode-957088 kubelet[3042]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 20:32:31 multinode-957088 kubelet[3042]: E0708 20:32:31.926492    3042 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 20:32:31 multinode-957088 kubelet[3042]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 20:32:31 multinode-957088 kubelet[3042]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 20:32:31 multinode-957088 kubelet[3042]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 20:32:31 multinode-957088 kubelet[3042]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:33:23.540747   45699 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19195-5988/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-957088 -n multinode-957088
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-957088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.38s)
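The "bufio.Scanner: token too long" message in the stderr block above comes from Go's bufio.Scanner, which refuses any token larger than its buffer cap (bufio.MaxScanTokenSize, 64 KiB by default); lastStart.txt evidently contains a line longer than that, so the harness could not echo the last start log. A minimal sketch of reading such a file with an enlarged buffer follows; it is not minikube's actual logs.go implementation, and the file path and the 10 MiB cap are illustrative assumptions.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Illustrative path only; the real file in the log above lives under the
	// jenkins minikube-integration workspace.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default cap is bufio.MaxScanTokenSize (64 KiB). Growing it lets very
	// long lines scan instead of failing with "bufio.Scanner: token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}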

                                                
                                    
x
+
TestPreload (168.86s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-309323 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-309323 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m35.386865832s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-309323 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-309323 image pull gcr.io/k8s-minikube/busybox: (1.088030897s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-309323
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-309323: (7.289674395s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-309323 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0708 20:39:23.843347   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-309323 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.01328932s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-309323 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
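The failure message above means that the image pulled before the stop (gcr.io/k8s-minikube/busybox) is absent from the image list printed after the restart, even though the preloaded v1.24.4 images survived. A check of roughly that shape is sketched below; it is not the actual preload_test.go code, and the binary path and profile name are simply taken from the commands shown in this log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the harness runs after the restart: list images in the profile.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-309323", "image", "list").CombinedOutput()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// The test expects the image pulled before the stop to still be listed.
	const want = "gcr.io/k8s-minikube/busybox"
	if !strings.Contains(string(out), want) {
		fmt.Printf("expected to find %s in image list output, instead got:\n%s", want, out)
	}
}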
panic.go:626: *** TestPreload FAILED at 2024-07-08 20:40:02.346314606 +0000 UTC m=+4262.259192401
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-309323 -n test-preload-309323
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-309323 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-309323 logs -n 25: (1.082407911s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n multinode-957088 sudo cat                                       | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /home/docker/cp-test_multinode-957088-m03_multinode-957088.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-957088 cp multinode-957088-m03:/home/docker/cp-test.txt                       | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m02:/home/docker/cp-test_multinode-957088-m03_multinode-957088-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n                                                                 | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | multinode-957088-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-957088 ssh -n multinode-957088-m02 sudo cat                                   | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | /home/docker/cp-test_multinode-957088-m03_multinode-957088-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-957088 node stop m03                                                          | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	| node    | multinode-957088 node start                                                             | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC | 08 Jul 24 20:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-957088                                                                | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC |                     |
	| stop    | -p multinode-957088                                                                     | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:25 UTC |                     |
	| start   | -p multinode-957088                                                                     | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:27 UTC | 08 Jul 24 20:30 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-957088                                                                | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:30 UTC |                     |
	| node    | multinode-957088 node delete                                                            | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:31 UTC | 08 Jul 24 20:31 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-957088 stop                                                                   | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:31 UTC |                     |
	| start   | -p multinode-957088                                                                     | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:33 UTC | 08 Jul 24 20:36 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-957088                                                                | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:36 UTC |                     |
	| start   | -p multinode-957088-m02                                                                 | multinode-957088-m02 | jenkins | v1.33.1 | 08 Jul 24 20:36 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-957088-m03                                                                 | multinode-957088-m03 | jenkins | v1.33.1 | 08 Jul 24 20:36 UTC | 08 Jul 24 20:37 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-957088                                                                 | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:37 UTC |                     |
	| delete  | -p multinode-957088-m03                                                                 | multinode-957088-m03 | jenkins | v1.33.1 | 08 Jul 24 20:37 UTC | 08 Jul 24 20:37 UTC |
	| delete  | -p multinode-957088                                                                     | multinode-957088     | jenkins | v1.33.1 | 08 Jul 24 20:37 UTC | 08 Jul 24 20:37 UTC |
	| start   | -p test-preload-309323                                                                  | test-preload-309323  | jenkins | v1.33.1 | 08 Jul 24 20:37 UTC | 08 Jul 24 20:38 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-309323 image pull                                                          | test-preload-309323  | jenkins | v1.33.1 | 08 Jul 24 20:38 UTC | 08 Jul 24 20:38 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-309323                                                                  | test-preload-309323  | jenkins | v1.33.1 | 08 Jul 24 20:38 UTC | 08 Jul 24 20:39 UTC |
	| start   | -p test-preload-309323                                                                  | test-preload-309323  | jenkins | v1.33.1 | 08 Jul 24 20:39 UTC | 08 Jul 24 20:40 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-309323 image list                                                          | test-preload-309323  | jenkins | v1.33.1 | 08 Jul 24 20:40 UTC | 08 Jul 24 20:40 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
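The command table above is the TestPreload sequence: a first start pinned to Kubernetes v1.24.4 with --preload=false, an image pull, a stop, and a second start of the same profile (the "Last Start" log below covers that second start). As an illustrative reproduction aid only, the Go sketch below replays the same sequence with os/exec. The flags are copied from the table; the binary path is the MINIKUBE_BIN value shown in the log below, while the profile name "test-preload-repro" is a placeholder and a KVM-capable Linux host is assumed.

// Illustrative repro sketch (not part of the report): replays the table's
// minikube command sequence against a throwaway profile.
package main

import (
	"log"
	"os"
	"os/exec"
)

// run shells out to the minikube binary and streams its output.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v failed: %v", bin, args, err)
	}
}

func main() {
	const mk = "out/minikube-linux-amd64" // MINIKUBE_BIN in the log below
	const profile = "test-preload-repro"  // placeholder profile name

	// First start: Kubernetes v1.24.4, explicitly without a preload.
	run(mk, "start", "-p", profile, "--memory=2200", "--alsologtostderr",
		"--wait=true", "--preload=false", "--driver=kvm2",
		"--container-runtime=crio", "--kubernetes-version=v1.24.4")
	// Pull an image so the restarted cluster has something to list.
	run(mk, "image", "pull", "gcr.io/k8s-minikube/busybox", "-p", profile)
	run(mk, "stop", "-p", profile)
	// Second start: same profile; this is the phase covered by "Last Start".
	run(mk, "start", "-p", profile, "--memory=2200", "--alsologtostderr",
		"-v=1", "--wait=true", "--driver=kvm2", "--container-runtime=crio")
	run(mk, "image", "list", "-p", profile)
}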
	
	
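One note before the log itself: the hostinfo line near its top is the JSON record minikube emits after probing the build host, and its field names (hostname, uptime, bootTime, ..., hostId) match gopsutil's host.InfoStat. The short standalone sketch below prints an equivalent record; it assumes the github.com/shirou/gopsutil/v3 module and is not code lifted from minikube.

// Minimal sketch: print a hostinfo record like the one logged below.
// Assumes: go get github.com/shirou/gopsutil/v3
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/shirou/gopsutil/v3/host"
)

func main() {
	info, err := host.Info() // hostname, uptime, kernel, virtualization role, ...
	if err != nil {
		log.Fatal(err)
	}
	// The struct's JSON tags are the same names seen in the log line
	// (hostname, bootTime, kernelVersion, hostId, ...).
	b, err := json.Marshal(info)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("hostinfo: %s\n", b)
}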
	==> Last Start <==
	Log file created at: 2024/07/08 20:39:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 20:39:00.155494   48058 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:39:00.155641   48058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:39:00.155653   48058 out.go:304] Setting ErrFile to fd 2...
	I0708 20:39:00.155659   48058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:39:00.155862   48058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:39:00.156387   48058 out.go:298] Setting JSON to false
	I0708 20:39:00.157253   48058 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4889,"bootTime":1720466251,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:39:00.157312   48058 start.go:139] virtualization: kvm guest
	I0708 20:39:00.159909   48058 out.go:177] * [test-preload-309323] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:39:00.161668   48058 notify.go:220] Checking for updates...
	I0708 20:39:00.161680   48058 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:39:00.163426   48058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:39:00.165257   48058 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:39:00.166687   48058 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:39:00.168054   48058 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:39:00.169519   48058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:39:00.171255   48058 config.go:182] Loaded profile config "test-preload-309323": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0708 20:39:00.171674   48058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:39:00.171721   48058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:39:00.186891   48058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42885
	I0708 20:39:00.187262   48058 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:39:00.187839   48058 main.go:141] libmachine: Using API Version  1
	I0708 20:39:00.187861   48058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:39:00.188173   48058 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:39:00.188351   48058 main.go:141] libmachine: (test-preload-309323) Calling .DriverName
	I0708 20:39:00.190351   48058 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0708 20:39:00.191884   48058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:39:00.192202   48058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:39:00.192237   48058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:39:00.215509   48058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39831
	I0708 20:39:00.215978   48058 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:39:00.216614   48058 main.go:141] libmachine: Using API Version  1
	I0708 20:39:00.216645   48058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:39:00.216971   48058 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:39:00.217200   48058 main.go:141] libmachine: (test-preload-309323) Calling .DriverName
	I0708 20:39:00.252374   48058 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 20:39:00.253640   48058 start.go:297] selected driver: kvm2
	I0708 20:39:00.253653   48058 start.go:901] validating driver "kvm2" against &{Name:test-preload-309323 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-309323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:39:00.253746   48058 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:39:00.254426   48058 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:39:00.254485   48058 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:39:00.269246   48058 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:39:00.269548   48058 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:39:00.269573   48058 cni.go:84] Creating CNI manager for ""
	I0708 20:39:00.269581   48058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:39:00.269641   48058 start.go:340] cluster config:
	{Name:test-preload-309323 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-309323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:39:00.269730   48058 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:39:00.272421   48058 out.go:177] * Starting "test-preload-309323" primary control-plane node in "test-preload-309323" cluster
	I0708 20:39:00.273633   48058 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0708 20:39:00.323679   48058 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0708 20:39:00.323706   48058 cache.go:56] Caching tarball of preloaded images
	I0708 20:39:00.323860   48058 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0708 20:39:00.325638   48058 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0708 20:39:00.327039   48058 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0708 20:39:00.352171   48058 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0708 20:39:03.842620   48058 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0708 20:39:03.842720   48058 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0708 20:39:04.684085   48058 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0708 20:39:04.684208   48058 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/config.json ...
	I0708 20:39:04.684419   48058 start.go:360] acquireMachinesLock for test-preload-309323: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:39:04.684477   48058 start.go:364] duration metric: took 37.612µs to acquireMachinesLock for "test-preload-309323"
	I0708 20:39:04.684493   48058 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:39:04.684504   48058 fix.go:54] fixHost starting: 
	I0708 20:39:04.684786   48058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:39:04.684818   48058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:39:04.699622   48058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I0708 20:39:04.700127   48058 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:39:04.700609   48058 main.go:141] libmachine: Using API Version  1
	I0708 20:39:04.700635   48058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:39:04.700923   48058 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:39:04.701147   48058 main.go:141] libmachine: (test-preload-309323) Calling .DriverName
	I0708 20:39:04.701283   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetState
	I0708 20:39:04.702879   48058 fix.go:112] recreateIfNeeded on test-preload-309323: state=Stopped err=<nil>
	I0708 20:39:04.702909   48058 main.go:141] libmachine: (test-preload-309323) Calling .DriverName
	W0708 20:39:04.703067   48058 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:39:04.705060   48058 out.go:177] * Restarting existing kvm2 VM for "test-preload-309323" ...
	I0708 20:39:04.706280   48058 main.go:141] libmachine: (test-preload-309323) Calling .Start
	I0708 20:39:04.706458   48058 main.go:141] libmachine: (test-preload-309323) Ensuring networks are active...
	I0708 20:39:04.707251   48058 main.go:141] libmachine: (test-preload-309323) Ensuring network default is active
	I0708 20:39:04.707660   48058 main.go:141] libmachine: (test-preload-309323) Ensuring network mk-test-preload-309323 is active
	I0708 20:39:04.708055   48058 main.go:141] libmachine: (test-preload-309323) Getting domain xml...
	I0708 20:39:04.708908   48058 main.go:141] libmachine: (test-preload-309323) Creating domain...
	I0708 20:39:05.914121   48058 main.go:141] libmachine: (test-preload-309323) Waiting to get IP...
	I0708 20:39:05.915000   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:05.915388   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:05.915481   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:05.915373   48109 retry.go:31] will retry after 286.437673ms: waiting for machine to come up
	I0708 20:39:06.204168   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:06.204640   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:06.204665   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:06.204593   48109 retry.go:31] will retry after 331.006462ms: waiting for machine to come up
	I0708 20:39:06.537080   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:06.537416   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:06.537444   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:06.537381   48109 retry.go:31] will retry after 294.742065ms: waiting for machine to come up
	I0708 20:39:06.833802   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:06.834320   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:06.834357   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:06.834247   48109 retry.go:31] will retry after 514.714498ms: waiting for machine to come up
	I0708 20:39:07.350946   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:07.351306   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:07.351335   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:07.351262   48109 retry.go:31] will retry after 649.512042ms: waiting for machine to come up
	I0708 20:39:08.001975   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:08.002421   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:08.002453   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:08.002376   48109 retry.go:31] will retry after 591.565292ms: waiting for machine to come up
	I0708 20:39:08.595066   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:08.595499   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:08.595522   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:08.595431   48109 retry.go:31] will retry after 945.816341ms: waiting for machine to come up
	I0708 20:39:09.542525   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:09.542909   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:09.542933   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:09.542870   48109 retry.go:31] will retry after 1.468696532s: waiting for machine to come up
	I0708 20:39:11.013998   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:11.014442   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:11.014470   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:11.014393   48109 retry.go:31] will retry after 1.760196659s: waiting for machine to come up
	I0708 20:39:12.777586   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:12.777984   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:12.778013   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:12.777939   48109 retry.go:31] will retry after 1.857184588s: waiting for machine to come up
	I0708 20:39:14.638027   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:14.638534   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:14.638557   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:14.638487   48109 retry.go:31] will retry after 1.806266791s: waiting for machine to come up
	I0708 20:39:16.446793   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:16.447176   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:16.447203   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:16.447130   48109 retry.go:31] will retry after 2.511174206s: waiting for machine to come up
	I0708 20:39:18.961839   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:18.962230   48058 main.go:141] libmachine: (test-preload-309323) DBG | unable to find current IP address of domain test-preload-309323 in network mk-test-preload-309323
	I0708 20:39:18.962265   48058 main.go:141] libmachine: (test-preload-309323) DBG | I0708 20:39:18.962192   48109 retry.go:31] will retry after 4.506980512s: waiting for machine to come up
	I0708 20:39:23.473439   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.473920   48058 main.go:141] libmachine: (test-preload-309323) Found IP for machine: 192.168.39.13
	I0708 20:39:23.473952   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has current primary IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.473964   48058 main.go:141] libmachine: (test-preload-309323) Reserving static IP address...
	I0708 20:39:23.474302   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "test-preload-309323", mac: "52:54:00:9d:32:16", ip: "192.168.39.13"} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:23.474326   48058 main.go:141] libmachine: (test-preload-309323) DBG | skip adding static IP to network mk-test-preload-309323 - found existing host DHCP lease matching {name: "test-preload-309323", mac: "52:54:00:9d:32:16", ip: "192.168.39.13"}
	I0708 20:39:23.474339   48058 main.go:141] libmachine: (test-preload-309323) Reserved static IP address: 192.168.39.13
	I0708 20:39:23.474350   48058 main.go:141] libmachine: (test-preload-309323) Waiting for SSH to be available...
	I0708 20:39:23.474375   48058 main.go:141] libmachine: (test-preload-309323) DBG | Getting to WaitForSSH function...
	I0708 20:39:23.476455   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.476805   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:23.476835   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.476956   48058 main.go:141] libmachine: (test-preload-309323) DBG | Using SSH client type: external
	I0708 20:39:23.476982   48058 main.go:141] libmachine: (test-preload-309323) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/test-preload-309323/id_rsa (-rw-------)
	I0708 20:39:23.477012   48058 main.go:141] libmachine: (test-preload-309323) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/test-preload-309323/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:39:23.477023   48058 main.go:141] libmachine: (test-preload-309323) DBG | About to run SSH command:
	I0708 20:39:23.477033   48058 main.go:141] libmachine: (test-preload-309323) DBG | exit 0
	I0708 20:39:23.599590   48058 main.go:141] libmachine: (test-preload-309323) DBG | SSH cmd err, output: <nil>: 
	I0708 20:39:23.599968   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetConfigRaw
	I0708 20:39:23.600585   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetIP
	I0708 20:39:23.603348   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.603731   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:23.603760   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.604006   48058 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/config.json ...
	I0708 20:39:23.604203   48058 machine.go:94] provisionDockerMachine start ...
	I0708 20:39:23.604220   48058 main.go:141] libmachine: (test-preload-309323) Calling .DriverName
	I0708 20:39:23.604482   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHHostname
	I0708 20:39:23.606594   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.606876   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:23.606912   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.606995   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHPort
	I0708 20:39:23.607161   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:23.607353   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:23.607486   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHUsername
	I0708 20:39:23.607639   48058 main.go:141] libmachine: Using SSH client type: native
	I0708 20:39:23.607885   48058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0708 20:39:23.607900   48058 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:39:23.711874   48058 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:39:23.711920   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetMachineName
	I0708 20:39:23.712214   48058 buildroot.go:166] provisioning hostname "test-preload-309323"
	I0708 20:39:23.712238   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetMachineName
	I0708 20:39:23.712430   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHHostname
	I0708 20:39:23.715000   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.715353   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:23.715386   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.715849   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHPort
	I0708 20:39:23.716080   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:23.716247   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:23.716371   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHUsername
	I0708 20:39:23.716526   48058 main.go:141] libmachine: Using SSH client type: native
	I0708 20:39:23.716744   48058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0708 20:39:23.716767   48058 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-309323 && echo "test-preload-309323" | sudo tee /etc/hostname
	I0708 20:39:23.833926   48058 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-309323
	
	I0708 20:39:23.833950   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHHostname
	I0708 20:39:23.836640   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.836995   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:23.837017   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.837204   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHPort
	I0708 20:39:23.837402   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:23.837567   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:23.837724   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHUsername
	I0708 20:39:23.837909   48058 main.go:141] libmachine: Using SSH client type: native
	I0708 20:39:23.838066   48058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0708 20:39:23.838081   48058 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-309323' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-309323/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-309323' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:39:23.952503   48058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:39:23.952532   48058 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:39:23.952566   48058 buildroot.go:174] setting up certificates
	I0708 20:39:23.952576   48058 provision.go:84] configureAuth start
	I0708 20:39:23.952584   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetMachineName
	I0708 20:39:23.952950   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetIP
	I0708 20:39:23.955438   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.955734   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:23.955765   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.955885   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHHostname
	I0708 20:39:23.957875   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.958134   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:23.958164   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:23.958220   48058 provision.go:143] copyHostCerts
	I0708 20:39:23.958295   48058 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:39:23.958306   48058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:39:23.958384   48058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:39:23.958485   48058 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:39:23.958495   48058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:39:23.958531   48058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:39:23.958601   48058 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:39:23.958610   48058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:39:23.958642   48058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:39:23.958706   48058 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.test-preload-309323 san=[127.0.0.1 192.168.39.13 localhost minikube test-preload-309323]
	I0708 20:39:24.226507   48058 provision.go:177] copyRemoteCerts
	I0708 20:39:24.226572   48058 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:39:24.226608   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHHostname
	I0708 20:39:24.229279   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.229590   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:24.229624   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.229753   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHPort
	I0708 20:39:24.229916   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:24.230121   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHUsername
	I0708 20:39:24.230249   48058 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/test-preload-309323/id_rsa Username:docker}
	I0708 20:39:24.314050   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 20:39:24.342313   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:39:24.367041   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0708 20:39:24.391401   48058 provision.go:87] duration metric: took 438.814805ms to configureAuth
	I0708 20:39:24.391430   48058 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:39:24.391636   48058 config.go:182] Loaded profile config "test-preload-309323": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0708 20:39:24.391719   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHHostname
	I0708 20:39:24.394280   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.394602   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:24.394634   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.394743   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHPort
	I0708 20:39:24.394926   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:24.395129   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:24.395271   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHUsername
	I0708 20:39:24.395437   48058 main.go:141] libmachine: Using SSH client type: native
	I0708 20:39:24.395629   48058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0708 20:39:24.395651   48058 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:39:24.661849   48058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:39:24.661880   48058 machine.go:97] duration metric: took 1.057663597s to provisionDockerMachine
	I0708 20:39:24.661895   48058 start.go:293] postStartSetup for "test-preload-309323" (driver="kvm2")
	I0708 20:39:24.661907   48058 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:39:24.661921   48058 main.go:141] libmachine: (test-preload-309323) Calling .DriverName
	I0708 20:39:24.662215   48058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:39:24.662240   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHHostname
	I0708 20:39:24.664738   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.665038   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:24.665066   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.665223   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHPort
	I0708 20:39:24.665422   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:24.665573   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHUsername
	I0708 20:39:24.665703   48058 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/test-preload-309323/id_rsa Username:docker}
	I0708 20:39:24.750265   48058 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:39:24.754481   48058 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:39:24.754508   48058 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:39:24.754590   48058 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:39:24.754666   48058 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:39:24.754745   48058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:39:24.763824   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:39:24.787374   48058 start.go:296] duration metric: took 125.465908ms for postStartSetup
	I0708 20:39:24.787420   48058 fix.go:56] duration metric: took 20.102920083s for fixHost
	I0708 20:39:24.787442   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHHostname
	I0708 20:39:24.789821   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.790099   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:24.790128   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.790246   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHPort
	I0708 20:39:24.790427   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:24.790538   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:24.790632   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHUsername
	I0708 20:39:24.790736   48058 main.go:141] libmachine: Using SSH client type: native
	I0708 20:39:24.790885   48058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0708 20:39:24.790894   48058 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:39:24.896171   48058 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720471164.871952687
	
	I0708 20:39:24.896205   48058 fix.go:216] guest clock: 1720471164.871952687
	I0708 20:39:24.896212   48058 fix.go:229] Guest: 2024-07-08 20:39:24.871952687 +0000 UTC Remote: 2024-07-08 20:39:24.787424793 +0000 UTC m=+24.665250857 (delta=84.527894ms)
	I0708 20:39:24.896231   48058 fix.go:200] guest clock delta is within tolerance: 84.527894ms
	I0708 20:39:24.896235   48058 start.go:83] releasing machines lock for "test-preload-309323", held for 20.211748263s
	I0708 20:39:24.896257   48058 main.go:141] libmachine: (test-preload-309323) Calling .DriverName
	I0708 20:39:24.896529   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetIP
	I0708 20:39:24.899168   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.899582   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:24.899613   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.899776   48058 main.go:141] libmachine: (test-preload-309323) Calling .DriverName
	I0708 20:39:24.900206   48058 main.go:141] libmachine: (test-preload-309323) Calling .DriverName
	I0708 20:39:24.900382   48058 main.go:141] libmachine: (test-preload-309323) Calling .DriverName
	I0708 20:39:24.900482   48058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:39:24.900562   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHHostname
	I0708 20:39:24.900600   48058 ssh_runner.go:195] Run: cat /version.json
	I0708 20:39:24.900626   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHHostname
	I0708 20:39:24.903074   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.903430   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:24.903471   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.903494   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.903608   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHPort
	I0708 20:39:24.903791   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:24.903899   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:24.903923   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:24.903964   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHUsername
	I0708 20:39:24.904095   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHPort
	I0708 20:39:24.904116   48058 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/test-preload-309323/id_rsa Username:docker}
	I0708 20:39:24.904238   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:24.904383   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHUsername
	I0708 20:39:24.904550   48058 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/test-preload-309323/id_rsa Username:docker}
	I0708 20:39:24.980755   48058 ssh_runner.go:195] Run: systemctl --version
	I0708 20:39:25.004544   48058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:39:25.152111   48058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:39:25.158464   48058 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:39:25.158534   48058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:39:25.175105   48058 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:39:25.175132   48058 start.go:494] detecting cgroup driver to use...
	I0708 20:39:25.175207   48058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:39:25.192180   48058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:39:25.207033   48058 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:39:25.207085   48058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:39:25.222491   48058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:39:25.237247   48058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:39:25.358428   48058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:39:25.496896   48058 docker.go:233] disabling docker service ...
	I0708 20:39:25.496966   48058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:39:25.511691   48058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:39:25.525521   48058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:39:25.667950   48058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:39:25.784403   48058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:39:25.798511   48058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:39:25.816880   48058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0708 20:39:25.816953   48058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:39:25.827551   48058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:39:25.827613   48058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:39:25.838158   48058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:39:25.848468   48058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:39:25.859076   48058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:39:25.869953   48058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:39:25.881716   48058 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:39:25.899077   48058 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:39:25.910116   48058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:39:25.919885   48058 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:39:25.919935   48058 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:39:25.932693   48058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:39:25.942562   48058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:39:26.055706   48058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:39:26.185435   48058 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:39:26.185508   48058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:39:26.191095   48058 start.go:562] Will wait 60s for crictl version
	I0708 20:39:26.191148   48058 ssh_runner.go:195] Run: which crictl
	I0708 20:39:26.194764   48058 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:39:26.234944   48058 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:39:26.235018   48058 ssh_runner.go:195] Run: crio --version
	I0708 20:39:26.262554   48058 ssh_runner.go:195] Run: crio --version
	I0708 20:39:26.290176   48058 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0708 20:39:26.291482   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetIP
	I0708 20:39:26.294058   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:26.294367   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:26.294391   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:26.294580   48058 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 20:39:26.298735   48058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:39:26.311142   48058 kubeadm.go:877] updating cluster {Name:test-preload-309323 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-309323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:39:26.311255   48058 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0708 20:39:26.311299   48058 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:39:26.354422   48058 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0708 20:39:26.354478   48058 ssh_runner.go:195] Run: which lz4
	I0708 20:39:26.358551   48058 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 20:39:26.362578   48058 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:39:26.362608   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0708 20:39:27.934210   48058 crio.go:462] duration metric: took 1.575682295s to copy over tarball
	I0708 20:39:27.934287   48058 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:39:30.334929   48058 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.400607669s)
	I0708 20:39:30.334957   48058 crio.go:469] duration metric: took 2.400719105s to extract the tarball
	I0708 20:39:30.334965   48058 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:39:30.377049   48058 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:39:30.418131   48058 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0708 20:39:30.418159   48058 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 20:39:30.418216   48058 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:39:30.418248   48058 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0708 20:39:30.418269   48058 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0708 20:39:30.418284   48058 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0708 20:39:30.418322   48058 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0708 20:39:30.418365   48058 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0708 20:39:30.418391   48058 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0708 20:39:30.418406   48058 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 20:39:30.419953   48058 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0708 20:39:30.419964   48058 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 20:39:30.419973   48058 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:39:30.419983   48058 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0708 20:39:30.419951   48058 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0708 20:39:30.419996   48058 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0708 20:39:30.420010   48058 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0708 20:39:30.420005   48058 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0708 20:39:30.570833   48058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0708 20:39:30.571471   48058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0708 20:39:30.571713   48058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0708 20:39:30.576153   48058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0708 20:39:30.578371   48058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0708 20:39:30.583236   48058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0708 20:39:30.651217   48058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0708 20:39:30.707203   48058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:39:30.722555   48058 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0708 20:39:30.722593   48058 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0708 20:39:30.722597   48058 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0708 20:39:30.722609   48058 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0708 20:39:30.722647   48058 ssh_runner.go:195] Run: which crictl
	I0708 20:39:30.722647   48058 ssh_runner.go:195] Run: which crictl
	I0708 20:39:30.722678   48058 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0708 20:39:30.722710   48058 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 20:39:30.722749   48058 ssh_runner.go:195] Run: which crictl
	I0708 20:39:30.722750   48058 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0708 20:39:30.722824   48058 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0708 20:39:30.722852   48058 ssh_runner.go:195] Run: which crictl
	I0708 20:39:30.756665   48058 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0708 20:39:30.756711   48058 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0708 20:39:30.756763   48058 ssh_runner.go:195] Run: which crictl
	I0708 20:39:30.774487   48058 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0708 20:39:30.774551   48058 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0708 20:39:30.774599   48058 ssh_runner.go:195] Run: which crictl
	I0708 20:39:30.792833   48058 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0708 20:39:30.792880   48058 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0708 20:39:30.792927   48058 ssh_runner.go:195] Run: which crictl
	I0708 20:39:30.900060   48058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0708 20:39:30.900123   48058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0708 20:39:30.900067   48058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0708 20:39:30.900168   48058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0708 20:39:30.900243   48058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0708 20:39:30.900253   48058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0708 20:39:30.900296   48058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0708 20:39:31.043104   48058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0708 20:39:31.043213   48058 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0708 20:39:31.059221   48058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0708 20:39:31.059390   48058 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0708 20:39:31.068870   48058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0708 20:39:31.068895   48058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0708 20:39:31.068994   48058 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0708 20:39:31.069003   48058 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0708 20:39:31.070103   48058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0708 20:39:31.070177   48058 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0708 20:39:31.072094   48058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0708 20:39:31.072130   48058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0708 20:39:31.072168   48058 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0708 20:39:31.072190   48058 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0708 20:39:31.072192   48058 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0708 20:39:31.072222   48058 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0708 20:39:31.072230   48058 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0708 20:39:31.072173   48058 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0708 20:39:31.078366   48058 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0708 20:39:31.078547   48058 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0708 20:39:31.081110   48058 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0708 20:39:31.082024   48058 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0708 20:39:34.341537   48058 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.26927762s)
	I0708 20:39:34.341569   48058 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0708 20:39:34.341590   48058 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0708 20:39:34.341602   48058 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.26931308s)
	I0708 20:39:34.341632   48058 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0708 20:39:34.341636   48058 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0708 20:39:36.592815   48058 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.251154271s)
	I0708 20:39:36.592849   48058 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0708 20:39:36.592875   48058 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0708 20:39:36.592920   48058 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0708 20:39:37.351716   48058 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0708 20:39:37.351768   48058 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0708 20:39:37.351835   48058 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0708 20:39:37.696061   48058 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0708 20:39:37.696109   48058 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0708 20:39:37.696171   48058 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0708 20:39:38.146492   48058 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0708 20:39:38.146544   48058 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0708 20:39:38.146599   48058 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0708 20:39:38.289474   48058 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0708 20:39:38.289513   48058 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0708 20:39:38.289559   48058 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0708 20:39:39.132824   48058 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0708 20:39:39.132886   48058 cache_images.go:123] Successfully loaded all cached images
	I0708 20:39:39.132894   48058 cache_images.go:92] duration metric: took 8.714721466s to LoadCachedImages
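Each image in that list follows the same path: stat the cached tarball under /var/lib/minikube/images, copy it over only if it is missing, then hand it to podman so CRI-O's storage picks it up. Reduced to one image, the sequence looks roughly like this (a sketch using paths from this log, not a verbatim extract of cache_images.go):

  # Per-image load path, illustrated with kube-proxy.
  IMG=/var/lib/minikube/images/kube-proxy_v1.24.4
  stat -c "%s %y" "$IMG"                 # existence check; the tarball is copied over only if this fails
  sudo podman load -i "$IMG"             # load into the shared containers/storage used by CRI-O
  sudo crictl images | grep kube-proxy   # the runtime should now report the image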
	I0708 20:39:39.132910   48058 kubeadm.go:928] updating node { 192.168.39.13 8443 v1.24.4 crio true true} ...
	I0708 20:39:39.133054   48058 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-309323 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-309323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
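The [Unit]/[Service] fragment above is the kubelet drop-in that is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps further down (the 378-byte scp). On the node, the merged unit can be inspected with (illustrative):

  # systemd prints the base unit plus every drop-in, so the ExecStart override
  # with --container-runtime-endpoint=unix:///var/run/crio/crio.sock should show up here.
  sudo systemctl cat kubelet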
	I0708 20:39:39.133119   48058 ssh_runner.go:195] Run: crio config
	I0708 20:39:39.183779   48058 cni.go:84] Creating CNI manager for ""
	I0708 20:39:39.183806   48058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:39:39.183822   48058 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:39:39.183859   48058 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.13 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-309323 NodeName:test-preload-309323 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:39:39.184040   48058 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-309323"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:39:39.184111   48058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0708 20:39:39.193872   48058 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:39:39.193952   48058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:39:39.203035   48058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0708 20:39:39.219478   48058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:39:39.236268   48058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
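The rendered kubeadm config lands on the node as /var/tmp/minikube/kubeadm.yaml.new and is later diffed against (and copied over) /var/tmp/minikube/kubeadm.yaml. To look at what actually reached the VM, something along these lines should work (a sketch; the profile name is the one from this log):

  # Run a command inside the minikube VM for this profile.
  minikube -p test-preload-309323 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
  minikube -p test-preload-309323 ssh "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"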
	I0708 20:39:39.253475   48058 ssh_runner.go:195] Run: grep 192.168.39.13	control-plane.minikube.internal$ /etc/hosts
	I0708 20:39:39.257241   48058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:39:39.269321   48058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:39:39.385958   48058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:39:39.403224   48058 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323 for IP: 192.168.39.13
	I0708 20:39:39.403246   48058 certs.go:194] generating shared ca certs ...
	I0708 20:39:39.403265   48058 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:39:39.403424   48058 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:39:39.403492   48058 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:39:39.403507   48058 certs.go:256] generating profile certs ...
	I0708 20:39:39.403595   48058 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/client.key
	I0708 20:39:39.403674   48058 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/apiserver.key.42c0ee17
	I0708 20:39:39.403738   48058 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/proxy-client.key
	I0708 20:39:39.403879   48058 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:39:39.403925   48058 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:39:39.403938   48058 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:39:39.403974   48058 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:39:39.403999   48058 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:39:39.404028   48058 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:39:39.404082   48058 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:39:39.405155   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:39:39.431694   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:39:39.472488   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:39:39.507462   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:39:39.542677   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0708 20:39:39.573251   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:39:39.608566   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:39:39.635211   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:39:39.659989   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:39:39.686099   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:39:39.710812   48058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:39:39.735627   48058 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:39:39.753237   48058 ssh_runner.go:195] Run: openssl version
	I0708 20:39:39.759416   48058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:39:39.771094   48058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:39:39.775868   48058 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:39:39.775938   48058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:39:39.782229   48058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:39:39.793817   48058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:39:39.805186   48058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:39:39.809708   48058 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:39:39.809780   48058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:39:39.815390   48058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:39:39.826349   48058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:39:39.837331   48058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:39:39.841958   48058 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:39:39.842020   48058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:39:39.847655   48058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:39:39.858597   48058 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:39:39.863142   48058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:39:39.869168   48058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:39:39.875169   48058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:39:39.881198   48058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:39:39.887287   48058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:39:39.893261   48058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
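These openssl calls are the certificate freshness gate: `-checkend 86400` exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a cert about to expire fails the check here instead of being reused. For example:

  # Exit status 0 means "will not expire within a day"; 1 means it will.
  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "still valid for at least 24h"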
	I0708 20:39:39.898981   48058 kubeadm.go:391] StartCluster: {Name:test-preload-309323 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-309323 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:39:39.899067   48058 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:39:39.899155   48058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:39:39.937291   48058 cri.go:89] found id: ""
	I0708 20:39:39.937359   48058 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:39:39.948105   48058 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:39:39.948137   48058 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:39:39.948143   48058 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:39:39.948200   48058 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:39:39.958379   48058 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:39:39.958827   48058 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-309323" does not appear in /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:39:39.958937   48058 kubeconfig.go:62] /home/jenkins/minikube-integration/19195-5988/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-309323" cluster setting kubeconfig missing "test-preload-309323" context setting]
	I0708 20:39:39.959195   48058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:39:39.959793   48058 kapi.go:59] client config for test-preload-309323: &rest.Config{Host:"https://192.168.39.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/client.crt", KeyFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/client.key", CAFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfdf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 20:39:39.960283   48058 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:39:39.970325   48058 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.13
	I0708 20:39:39.970352   48058 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:39:39.970363   48058 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:39:39.970424   48058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:39:40.007294   48058 cri.go:89] found id: ""
	I0708 20:39:40.007368   48058 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:39:40.024664   48058 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:39:40.034687   48058 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:39:40.034708   48058 kubeadm.go:156] found existing configuration files:
	
	I0708 20:39:40.034754   48058 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:39:40.044374   48058 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:39:40.044421   48058 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:39:40.054241   48058 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:39:40.063777   48058 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:39:40.063833   48058 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:39:40.073536   48058 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:39:40.082696   48058 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:39:40.082772   48058 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:39:40.092253   48058 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:39:40.101074   48058 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:39:40.101138   48058 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:39:40.110536   48058 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:39:40.119980   48058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:39:40.217150   48058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:39:41.299051   48058 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.081868405s)
	I0708 20:39:41.299083   48058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:39:41.572716   48058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:39:41.632014   48058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:39:41.715412   48058 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:39:41.715531   48058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:39:42.216011   48058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:39:42.716524   48058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:39:42.755345   48058 api_server.go:72] duration metric: took 1.039934036s to wait for apiserver process to appear ...
	I0708 20:39:42.755368   48058 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:39:42.755394   48058 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0708 20:39:42.755812   48058 api_server.go:269] stopped: https://192.168.39.13:8443/healthz: Get "https://192.168.39.13:8443/healthz": dial tcp 192.168.39.13:8443: connect: connection refused
	I0708 20:39:43.255649   48058 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0708 20:39:46.962104   48058 api_server.go:279] https://192.168.39.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:39:46.962136   48058 api_server.go:103] status: https://192.168.39.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:39:46.962151   48058 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0708 20:39:47.002708   48058 api_server.go:279] https://192.168.39.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:39:47.002740   48058 api_server.go:103] status: https://192.168.39.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:39:47.256143   48058 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0708 20:39:47.262452   48058 api_server.go:279] https://192.168.39.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:39:47.262493   48058 api_server.go:103] status: https://192.168.39.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:39:47.756104   48058 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0708 20:39:47.762342   48058 api_server.go:279] https://192.168.39.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:39:47.762365   48058 api_server.go:103] status: https://192.168.39.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:39:48.255908   48058 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0708 20:39:48.261284   48058 api_server.go:279] https://192.168.39.13:8443/healthz returned 200:
	ok
	I0708 20:39:48.267538   48058 api_server.go:141] control plane version: v1.24.4
	I0708 20:39:48.267564   48058 api_server.go:131] duration metric: took 5.512190246s to wait for apiserver health ...
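The 403 -> 500 -> 200 progression above is the normal start-up sequence for a restarted apiserver: the anonymous probe is rejected with 403 until the RBAC bootstrap roles (which include the binding that lets unauthenticated clients read /healthz) exist, /healthz then returns 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200 once every check passes. The same endpoint can be probed by hand (illustrative; --insecure because the probe presents no client certificate):

  # ?verbose makes the apiserver list the individual healthz checks, as in the log above.
  curl --insecure "https://192.168.39.13:8443/healthz?verbose"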
	I0708 20:39:48.267573   48058 cni.go:84] Creating CNI manager for ""
	I0708 20:39:48.267578   48058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:39:48.269632   48058 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:39:48.271045   48058 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:39:48.281982   48058 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 20:39:48.303268   48058 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:39:48.311827   48058 system_pods.go:59] 8 kube-system pods found
	I0708 20:39:48.311860   48058 system_pods.go:61] "coredns-6d4b75cb6d-br7gk" [201511e1-16ae-4d46-8617-e169c95cdbc0] Running
	I0708 20:39:48.311864   48058 system_pods.go:61] "coredns-6d4b75cb6d-s6bz4" [9d0eb78c-aa32-4b4f-84db-fec283ac0e56] Running
	I0708 20:39:48.311871   48058 system_pods.go:61] "etcd-test-preload-309323" [879a2b51-a850-4b51-b728-4ba4be5d6bf4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:39:48.311875   48058 system_pods.go:61] "kube-apiserver-test-preload-309323" [36db87a2-13d3-45c3-afa7-31a134d51cb7] Running
	I0708 20:39:48.311886   48058 system_pods.go:61] "kube-controller-manager-test-preload-309323" [c3dbb172-afdd-4b15-8481-7fa7dc9c4f8a] Running
	I0708 20:39:48.311889   48058 system_pods.go:61] "kube-proxy-qpjgv" [2fc18f83-0a80-4fa2-9ae9-473123cad4ed] Running
	I0708 20:39:48.311894   48058 system_pods.go:61] "kube-scheduler-test-preload-309323" [414437b3-7540-4782-ab30-0e71e55e829f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:39:48.311898   48058 system_pods.go:61] "storage-provisioner" [20d6991d-d160-4b3a-b907-71165a101ce9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 20:39:48.311905   48058 system_pods.go:74] duration metric: took 8.614445ms to wait for pod list to return data ...
	I0708 20:39:48.311914   48058 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:39:48.315307   48058 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:39:48.315335   48058 node_conditions.go:123] node cpu capacity is 2
	I0708 20:39:48.315349   48058 node_conditions.go:105] duration metric: took 3.425743ms to run NodePressure ...
	I0708 20:39:48.315365   48058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:39:48.508680   48058 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:39:48.514377   48058 kubeadm.go:733] kubelet initialised
	I0708 20:39:48.514398   48058 kubeadm.go:734] duration metric: took 5.689854ms waiting for restarted kubelet to initialise ...
	I0708 20:39:48.514406   48058 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:39:48.520694   48058 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-br7gk" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:48.529507   48058 pod_ready.go:97] node "test-preload-309323" hosting pod "coredns-6d4b75cb6d-br7gk" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:48.529529   48058 pod_ready.go:81] duration metric: took 8.809827ms for pod "coredns-6d4b75cb6d-br7gk" in "kube-system" namespace to be "Ready" ...
	E0708 20:39:48.529537   48058 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-309323" hosting pod "coredns-6d4b75cb6d-br7gk" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:48.529547   48058 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-s6bz4" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:48.534698   48058 pod_ready.go:97] node "test-preload-309323" hosting pod "coredns-6d4b75cb6d-s6bz4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:48.534721   48058 pod_ready.go:81] duration metric: took 5.165931ms for pod "coredns-6d4b75cb6d-s6bz4" in "kube-system" namespace to be "Ready" ...
	E0708 20:39:48.534729   48058 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-309323" hosting pod "coredns-6d4b75cb6d-s6bz4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:48.534736   48058 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:48.538669   48058 pod_ready.go:97] node "test-preload-309323" hosting pod "etcd-test-preload-309323" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:48.538692   48058 pod_ready.go:81] duration metric: took 3.946458ms for pod "etcd-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	E0708 20:39:48.538702   48058 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-309323" hosting pod "etcd-test-preload-309323" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:48.538709   48058 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:48.707480   48058 pod_ready.go:97] node "test-preload-309323" hosting pod "kube-apiserver-test-preload-309323" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:48.707511   48058 pod_ready.go:81] duration metric: took 168.79186ms for pod "kube-apiserver-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	E0708 20:39:48.707524   48058 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-309323" hosting pod "kube-apiserver-test-preload-309323" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:48.707532   48058 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:49.108317   48058 pod_ready.go:97] node "test-preload-309323" hosting pod "kube-controller-manager-test-preload-309323" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:49.108357   48058 pod_ready.go:81] duration metric: took 400.808068ms for pod "kube-controller-manager-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	E0708 20:39:49.108371   48058 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-309323" hosting pod "kube-controller-manager-test-preload-309323" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:49.108385   48058 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qpjgv" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:49.507548   48058 pod_ready.go:97] node "test-preload-309323" hosting pod "kube-proxy-qpjgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:49.507575   48058 pod_ready.go:81] duration metric: took 399.17567ms for pod "kube-proxy-qpjgv" in "kube-system" namespace to be "Ready" ...
	E0708 20:39:49.507584   48058 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-309323" hosting pod "kube-proxy-qpjgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:49.507590   48058 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:49.911534   48058 pod_ready.go:97] node "test-preload-309323" hosting pod "kube-scheduler-test-preload-309323" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:49.911558   48058 pod_ready.go:81] duration metric: took 403.962254ms for pod "kube-scheduler-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	E0708 20:39:49.911568   48058 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-309323" hosting pod "kube-scheduler-test-preload-309323" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:49.911575   48058 pod_ready.go:38] duration metric: took 1.397162229s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
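The wait loop above skips every control-plane pod because the node itself has not reported Ready yet; only once the node condition flips does the per-pod wait proceed. A minimal client-go sketch of that node check, assuming a reachable kubeconfig at the default location (the nodeReady helper and the hard-coded node name are illustrative, not minikube's actual code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node currently has a Ready=True condition.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	// Mirror the log: while the node is not Ready, pod readiness waits are skipped.
    	ready, err := nodeReady(ctx, cs, "test-preload-309323")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("node Ready:", ready)
    }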
	I0708 20:39:49.911591   48058 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 20:39:49.929111   48058 ops.go:34] apiserver oom_adj: -16
	I0708 20:39:49.929129   48058 kubeadm.go:591] duration metric: took 9.980981032s to restartPrimaryControlPlane
	I0708 20:39:49.929144   48058 kubeadm.go:393] duration metric: took 10.030160931s to StartCluster
	I0708 20:39:49.929159   48058 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:39:49.929219   48058 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:39:49.929783   48058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:39:49.929992   48058 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 20:39:49.930091   48058 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 20:39:49.930186   48058 addons.go:69] Setting storage-provisioner=true in profile "test-preload-309323"
	I0708 20:39:49.930204   48058 config.go:182] Loaded profile config "test-preload-309323": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0708 20:39:49.930214   48058 addons.go:234] Setting addon storage-provisioner=true in "test-preload-309323"
	W0708 20:39:49.930282   48058 addons.go:243] addon storage-provisioner should already be in state true
	I0708 20:39:49.930311   48058 host.go:66] Checking if "test-preload-309323" exists ...
	I0708 20:39:49.930209   48058 addons.go:69] Setting default-storageclass=true in profile "test-preload-309323"
	I0708 20:39:49.930349   48058 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-309323"
	I0708 20:39:49.930711   48058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:39:49.930725   48058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:39:49.930762   48058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:39:49.930855   48058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:39:49.931733   48058 out.go:177] * Verifying Kubernetes components...
	I0708 20:39:49.933305   48058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:39:49.945878   48058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I0708 20:39:49.946044   48058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40181
	I0708 20:39:49.946281   48058 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:39:49.946472   48058 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:39:49.946813   48058 main.go:141] libmachine: Using API Version  1
	I0708 20:39:49.946833   48058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:39:49.946899   48058 main.go:141] libmachine: Using API Version  1
	I0708 20:39:49.946922   48058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:39:49.947270   48058 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:39:49.947289   48058 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:39:49.947443   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetState
	I0708 20:39:49.947846   48058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:39:49.947888   48058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:39:49.949595   48058 kapi.go:59] client config for test-preload-309323: &rest.Config{Host:"https://192.168.39.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/client.crt", KeyFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/test-preload-309323/client.key", CAFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfdf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 20:39:49.949803   48058 addons.go:234] Setting addon default-storageclass=true in "test-preload-309323"
	W0708 20:39:49.949816   48058 addons.go:243] addon default-storageclass should already be in state true
	I0708 20:39:49.949841   48058 host.go:66] Checking if "test-preload-309323" exists ...
	I0708 20:39:49.950092   48058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:39:49.950125   48058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:39:49.963329   48058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I0708 20:39:49.963780   48058 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:39:49.964088   48058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46157
	I0708 20:39:49.964329   48058 main.go:141] libmachine: Using API Version  1
	I0708 20:39:49.964354   48058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:39:49.964465   48058 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:39:49.964705   48058 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:39:49.964885   48058 main.go:141] libmachine: Using API Version  1
	I0708 20:39:49.964907   48058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:39:49.964918   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetState
	I0708 20:39:49.965215   48058 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:39:49.965734   48058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:39:49.965777   48058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:39:49.966506   48058 main.go:141] libmachine: (test-preload-309323) Calling .DriverName
	I0708 20:39:49.968473   48058 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:39:49.969726   48058 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:39:49.969739   48058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 20:39:49.969752   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHHostname
	I0708 20:39:49.972927   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:49.973375   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:49.973399   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:49.973554   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHPort
	I0708 20:39:49.973712   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:49.973847   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHUsername
	I0708 20:39:49.973989   48058 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/test-preload-309323/id_rsa Username:docker}
	I0708 20:39:49.982264   48058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0708 20:39:49.982608   48058 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:39:49.983102   48058 main.go:141] libmachine: Using API Version  1
	I0708 20:39:49.983129   48058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:39:49.983427   48058 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:39:49.983623   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetState
	I0708 20:39:49.985035   48058 main.go:141] libmachine: (test-preload-309323) Calling .DriverName
	I0708 20:39:49.985239   48058 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 20:39:49.985254   48058 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 20:39:49.985269   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHHostname
	I0708 20:39:49.987881   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:49.988337   48058 main.go:141] libmachine: (test-preload-309323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:32:16", ip: ""} in network mk-test-preload-309323: {Iface:virbr1 ExpiryTime:2024-07-08 21:39:15 +0000 UTC Type:0 Mac:52:54:00:9d:32:16 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:test-preload-309323 Clientid:01:52:54:00:9d:32:16}
	I0708 20:39:49.988362   48058 main.go:141] libmachine: (test-preload-309323) DBG | domain test-preload-309323 has defined IP address 192.168.39.13 and MAC address 52:54:00:9d:32:16 in network mk-test-preload-309323
	I0708 20:39:49.988523   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHPort
	I0708 20:39:49.988687   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHKeyPath
	I0708 20:39:49.988828   48058 main.go:141] libmachine: (test-preload-309323) Calling .GetSSHUsername
	I0708 20:39:49.988951   48058 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/test-preload-309323/id_rsa Username:docker}
	I0708 20:39:50.128303   48058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:39:50.145486   48058 node_ready.go:35] waiting up to 6m0s for node "test-preload-309323" to be "Ready" ...
	I0708 20:39:50.211858   48058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 20:39:50.307987   48058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:39:51.101322   48058 main.go:141] libmachine: Making call to close driver server
	I0708 20:39:51.101349   48058 main.go:141] libmachine: (test-preload-309323) Calling .Close
	I0708 20:39:51.101643   48058 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:39:51.101663   48058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:39:51.101672   48058 main.go:141] libmachine: Making call to close driver server
	I0708 20:39:51.101679   48058 main.go:141] libmachine: (test-preload-309323) Calling .Close
	I0708 20:39:51.101884   48058 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:39:51.101904   48058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:39:51.101924   48058 main.go:141] libmachine: (test-preload-309323) DBG | Closing plugin on server side
	I0708 20:39:51.112627   48058 main.go:141] libmachine: Making call to close driver server
	I0708 20:39:51.112647   48058 main.go:141] libmachine: (test-preload-309323) Calling .Close
	I0708 20:39:51.112903   48058 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:39:51.112923   48058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:39:51.112922   48058 main.go:141] libmachine: (test-preload-309323) DBG | Closing plugin on server side
	I0708 20:39:51.140933   48058 main.go:141] libmachine: Making call to close driver server
	I0708 20:39:51.140957   48058 main.go:141] libmachine: (test-preload-309323) Calling .Close
	I0708 20:39:51.141226   48058 main.go:141] libmachine: (test-preload-309323) DBG | Closing plugin on server side
	I0708 20:39:51.141268   48058 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:39:51.141280   48058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:39:51.141293   48058 main.go:141] libmachine: Making call to close driver server
	I0708 20:39:51.141303   48058 main.go:141] libmachine: (test-preload-309323) Calling .Close
	I0708 20:39:51.141504   48058 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:39:51.141525   48058 main.go:141] libmachine: (test-preload-309323) DBG | Closing plugin on server side
	I0708 20:39:51.141532   48058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:39:51.143630   48058 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0708 20:39:51.144785   48058 addons.go:510] duration metric: took 1.214705585s for enable addons: enabled=[default-storageclass storage-provisioner]
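Addon enablement here reduces to copying each manifest onto the node and applying it with the pinned kubectl binary against the node-local kubeconfig, as the two ssh_runner commands above show. A rough sketch of that apply step driven from Go, assuming it runs where those node paths exist (the applyAddon wrapper is illustrative; the paths mirror the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyAddon applies a manifest with kubectl, pointing KUBECONFIG at the
    // cluster's admin kubeconfig, mirroring the command seen in the log.
    func applyAddon(kubectl, kubeconfig, manifest string) error {
    	cmd := exec.Command(kubectl, "apply", "-f", manifest)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply %s: %v\n%s", manifest, err, out)
    	}
    	return nil
    }

    func main() {
    	// Paths as they appear on the minikube node in the log above.
    	err := applyAddon(
    		"/var/lib/minikube/binaries/v1.24.4/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }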
	I0708 20:39:52.151071   48058 node_ready.go:53] node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:54.649621   48058 node_ready.go:53] node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:57.149014   48058 node_ready.go:53] node "test-preload-309323" has status "Ready":"False"
	I0708 20:39:57.649701   48058 node_ready.go:49] node "test-preload-309323" has status "Ready":"True"
	I0708 20:39:57.649725   48058 node_ready.go:38] duration metric: took 7.504204952s for node "test-preload-309323" to be "Ready" ...
	I0708 20:39:57.649734   48058 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:39:57.655959   48058 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-s6bz4" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:57.661219   48058 pod_ready.go:92] pod "coredns-6d4b75cb6d-s6bz4" in "kube-system" namespace has status "Ready":"True"
	I0708 20:39:57.661241   48058 pod_ready.go:81] duration metric: took 5.253885ms for pod "coredns-6d4b75cb6d-s6bz4" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:57.661253   48058 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:57.666103   48058 pod_ready.go:92] pod "etcd-test-preload-309323" in "kube-system" namespace has status "Ready":"True"
	I0708 20:39:57.666123   48058 pod_ready.go:81] duration metric: took 4.862306ms for pod "etcd-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:57.666133   48058 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:57.671235   48058 pod_ready.go:92] pod "kube-apiserver-test-preload-309323" in "kube-system" namespace has status "Ready":"True"
	I0708 20:39:57.671260   48058 pod_ready.go:81] duration metric: took 5.113164ms for pod "kube-apiserver-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:57.671273   48058 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	I0708 20:39:59.678512   48058 pod_ready.go:102] pod "kube-controller-manager-test-preload-309323" in "kube-system" namespace has status "Ready":"False"
	I0708 20:40:01.180793   48058 pod_ready.go:92] pod "kube-controller-manager-test-preload-309323" in "kube-system" namespace has status "Ready":"True"
	I0708 20:40:01.180817   48058 pod_ready.go:81] duration metric: took 3.509537618s for pod "kube-controller-manager-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	I0708 20:40:01.180827   48058 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpjgv" in "kube-system" namespace to be "Ready" ...
	I0708 20:40:01.186434   48058 pod_ready.go:92] pod "kube-proxy-qpjgv" in "kube-system" namespace has status "Ready":"True"
	I0708 20:40:01.186454   48058 pod_ready.go:81] duration metric: took 5.621764ms for pod "kube-proxy-qpjgv" in "kube-system" namespace to be "Ready" ...
	I0708 20:40:01.186464   48058 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	I0708 20:40:01.251829   48058 pod_ready.go:92] pod "kube-scheduler-test-preload-309323" in "kube-system" namespace has status "Ready":"True"
	I0708 20:40:01.251854   48058 pod_ready.go:81] duration metric: took 65.384647ms for pod "kube-scheduler-test-preload-309323" in "kube-system" namespace to be "Ready" ...
	I0708 20:40:01.251864   48058 pod_ready.go:38] duration metric: took 3.602121255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:40:01.251875   48058 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:40:01.251927   48058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:40:01.268542   48058 api_server.go:72] duration metric: took 11.338518465s to wait for apiserver process to appear ...
	I0708 20:40:01.268573   48058 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:40:01.268590   48058 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0708 20:40:01.274683   48058 api_server.go:279] https://192.168.39.13:8443/healthz returned 200:
	ok
	I0708 20:40:01.276386   48058 api_server.go:141] control plane version: v1.24.4
	I0708 20:40:01.276406   48058 api_server.go:131] duration metric: took 7.825973ms to wait for apiserver health ...
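The healthz probe above is a plain HTTPS GET against the apiserver using the profile's client certificate and CA; a 200 response with body "ok" counts as healthy. A minimal sketch of that request, assuming the cert/key/CA paths from the kapi.go client config earlier in the log (the healthz function itself is illustrative):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    // healthz performs an authenticated GET on the apiserver /healthz endpoint
    // and returns the response body (expected to be "ok" when healthy).
    func healthz(host, certFile, keyFile, caFile string) (string, error) {
    	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
    	if err != nil {
    		return "", err
    	}
    	caPEM, err := os.ReadFile(caFile)
    	if err != nil {
    		return "", err
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{
    		Timeout: 10 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{
    				Certificates: []tls.Certificate{cert},
    				RootCAs:      pool,
    			},
    		},
    	}
    	resp, err := client.Get("https://" + host + "/healthz")
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	return string(body), err
    }

    func main() {
    	base := "/home/jenkins/minikube-integration/19195-5988/.minikube"
    	out, err := healthz("192.168.39.13:8443",
    		base+"/profiles/test-preload-309323/client.crt",
    		base+"/profiles/test-preload-309323/client.key",
    		base+"/ca.crt")
    	fmt.Println(out, err)
    }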
	I0708 20:40:01.276414   48058 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:40:01.453597   48058 system_pods.go:59] 7 kube-system pods found
	I0708 20:40:01.453637   48058 system_pods.go:61] "coredns-6d4b75cb6d-s6bz4" [9d0eb78c-aa32-4b4f-84db-fec283ac0e56] Running
	I0708 20:40:01.453644   48058 system_pods.go:61] "etcd-test-preload-309323" [879a2b51-a850-4b51-b728-4ba4be5d6bf4] Running
	I0708 20:40:01.453650   48058 system_pods.go:61] "kube-apiserver-test-preload-309323" [36db87a2-13d3-45c3-afa7-31a134d51cb7] Running
	I0708 20:40:01.453654   48058 system_pods.go:61] "kube-controller-manager-test-preload-309323" [c3dbb172-afdd-4b15-8481-7fa7dc9c4f8a] Running
	I0708 20:40:01.453659   48058 system_pods.go:61] "kube-proxy-qpjgv" [2fc18f83-0a80-4fa2-9ae9-473123cad4ed] Running
	I0708 20:40:01.453663   48058 system_pods.go:61] "kube-scheduler-test-preload-309323" [414437b3-7540-4782-ab30-0e71e55e829f] Running
	I0708 20:40:01.453668   48058 system_pods.go:61] "storage-provisioner" [20d6991d-d160-4b3a-b907-71165a101ce9] Running
	I0708 20:40:01.453674   48058 system_pods.go:74] duration metric: took 177.254784ms to wait for pod list to return data ...
	I0708 20:40:01.453683   48058 default_sa.go:34] waiting for default service account to be created ...
	I0708 20:40:01.650544   48058 default_sa.go:45] found service account: "default"
	I0708 20:40:01.650570   48058 default_sa.go:55] duration metric: took 196.881917ms for default service account to be created ...
	I0708 20:40:01.650578   48058 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 20:40:01.852502   48058 system_pods.go:86] 7 kube-system pods found
	I0708 20:40:01.852537   48058 system_pods.go:89] "coredns-6d4b75cb6d-s6bz4" [9d0eb78c-aa32-4b4f-84db-fec283ac0e56] Running
	I0708 20:40:01.852543   48058 system_pods.go:89] "etcd-test-preload-309323" [879a2b51-a850-4b51-b728-4ba4be5d6bf4] Running
	I0708 20:40:01.852547   48058 system_pods.go:89] "kube-apiserver-test-preload-309323" [36db87a2-13d3-45c3-afa7-31a134d51cb7] Running
	I0708 20:40:01.852552   48058 system_pods.go:89] "kube-controller-manager-test-preload-309323" [c3dbb172-afdd-4b15-8481-7fa7dc9c4f8a] Running
	I0708 20:40:01.852556   48058 system_pods.go:89] "kube-proxy-qpjgv" [2fc18f83-0a80-4fa2-9ae9-473123cad4ed] Running
	I0708 20:40:01.852559   48058 system_pods.go:89] "kube-scheduler-test-preload-309323" [414437b3-7540-4782-ab30-0e71e55e829f] Running
	I0708 20:40:01.852563   48058 system_pods.go:89] "storage-provisioner" [20d6991d-d160-4b3a-b907-71165a101ce9] Running
	I0708 20:40:01.852571   48058 system_pods.go:126] duration metric: took 201.986996ms to wait for k8s-apps to be running ...
	I0708 20:40:01.852577   48058 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 20:40:01.852620   48058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:40:01.867980   48058 system_svc.go:56] duration metric: took 15.394874ms WaitForService to wait for kubelet
	I0708 20:40:01.868012   48058 kubeadm.go:576] duration metric: took 11.937996195s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:40:01.868029   48058 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:40:02.051961   48058 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:40:02.051986   48058 node_conditions.go:123] node cpu capacity is 2
	I0708 20:40:02.051995   48058 node_conditions.go:105] duration metric: took 183.961735ms to run NodePressure ...
	I0708 20:40:02.052006   48058 start.go:240] waiting for startup goroutines ...
	I0708 20:40:02.052012   48058 start.go:245] waiting for cluster config update ...
	I0708 20:40:02.052022   48058 start.go:254] writing updated cluster config ...
	I0708 20:40:02.052268   48058 ssh_runner.go:195] Run: rm -f paused
	I0708 20:40:02.099012   48058 start.go:600] kubectl: 1.30.2, cluster: 1.24.4 (minor skew: 6)
	I0708 20:40:02.101020   48058 out.go:177] 
	W0708 20:40:02.102349   48058 out.go:239] ! /usr/local/bin/kubectl is version 1.30.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0708 20:40:02.103581   48058 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0708 20:40:02.104842   48058 out.go:177] * Done! kubectl is now configured to use "test-preload-309323" cluster and "default" namespace by default
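The "minor skew: 6" warning above comes from comparing the host kubectl's minor version (30) with the cluster's (24); kubectl is generally only supported within one minor version of the apiserver, hence the hint to use the pinned binary via `minikube kubectl`. A small sketch of that comparison, with an illustrative minorSkew helper:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor versions of
    // two "major.minor.patch" strings, e.g. "1.30.2" vs "1.24.4" -> 6.
    func minorSkew(a, b string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("unexpected version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	ma, err := minor(a)
    	if err != nil {
    		return 0, err
    	}
    	mb, err := minor(b)
    	if err != nil {
    		return 0, err
    	}
    	if ma > mb {
    		return ma - mb, nil
    	}
    	return mb - ma, nil
    }

    func main() {
    	skew, _ := minorSkew("1.30.2", "1.24.4")
    	fmt.Println("minor skew:", skew) // prints: minor skew: 6
    }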
	
	
	==> CRI-O <==
	Jul 08 20:40:02 test-preload-309323 crio[704]: time="2024-07-08 20:40:02.980868595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720471202980846371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43756191-50e6-4d95-b323-7916183adc62 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:40:02 test-preload-309323 crio[704]: time="2024-07-08 20:40:02.981489346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d0d8107-734c-4248-9fc6-9fdf31fffae6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:40:02 test-preload-309323 crio[704]: time="2024-07-08 20:40:02.981539941Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d0d8107-734c-4248-9fc6-9fdf31fffae6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:40:02 test-preload-309323 crio[704]: time="2024-07-08 20:40:02.981699638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d393b750358cc179b8a905f0e2dec1a499da97da269726bccb6d432bb659b9ff,PodSandboxId:8419f6ebfb61eb2a6df42bad8b9cf93ffb3ce5515c6482158996914232a97024,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1720471192638245436,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-s6bz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0eb78c-aa32-4b4f-84db-fec283ac0e56,},Annotations:map[string]string{io.kubernetes.container.hash: 23c26aa7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdeece9ce92a6cb3b5657326e7d895cf041d30bf18e77b8906a93bcc5b40975,PodSandboxId:380440b35d6af2c2255c5a4576a3ba3e68a72fc5915690f779ceaa6469644540,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1720471189012206732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpjgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2fc18f83-0a80-4fa2-9ae9-473123cad4ed,},Annotations:map[string]string{io.kubernetes.container.hash: 36ce7e2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:283439be412bde819be74e1904648e8e1b771b28fd31bdaf5c794bda8b94ea87,PodSandboxId:97ccedf256fdde7a160589f62eb93f4547179fe91e81e7092186917785cf93eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720471188718698799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
d6991d-d160-4b3a-b907-71165a101ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d15e2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3592606c4ca78c42af507168bb6072549e637425e5c8aa11d0c0673430093f96,PodSandboxId:10ed71b3ea7e17e77c017cac0ed0de297c16971e6801839dadb3265b9cb2b459,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1720471182514064459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 12393bee4a13cf27442cabdab6ef95fe,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc674d61dc50aec3da94a80eea21e35b57fade3d25ce294b1774c79a86bf8e75,PodSandboxId:7e914f5aff952191f5f7daca8dc1a316a276caf9f6e1c0419beb55ec032abe63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1720471182464615142,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6074fe7cdac79b74d33ac3f
6360299d8,},Annotations:map[string]string{io.kubernetes.container.hash: 921d5aab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cb3455f2a519fd79d0a0f11bacc308efc28f06cc704406a8b8110440f55dda,PodSandboxId:ee0191fd9dd0c5219c529511d331ee1a5478b37506a4c293063f67fd981debb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1720471182429406925,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ecc25da3f01c329b0424d5d1db7b1c,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b368e879a01932732f4697d66847eeb51f552a2549405a34a3b123e473c00cef,PodSandboxId:5fd54ee1df3969daecb6a01857095d5e733234d7d08bdcd9350af4cc3432a839,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1720471182400023153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d688b9c86cdc3069d45bee60d6987b5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 4ad41263,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d0d8107-734c-4248-9fc6-9fdf31fffae6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.020258088Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=91273e35-2206-4839-9d40-cb3d772f07c8 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.020392203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=91273e35-2206-4839-9d40-cb3d772f07c8 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.021782649Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c76ab450-aef7-4fbb-a8ed-c14cc2056e38 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.022482271Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720471203022455967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c76ab450-aef7-4fbb-a8ed-c14cc2056e38 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.023035121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b51c813-1053-4e62-80c9-3de35a09ec02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.023215481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b51c813-1053-4e62-80c9-3de35a09ec02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.023382268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d393b750358cc179b8a905f0e2dec1a499da97da269726bccb6d432bb659b9ff,PodSandboxId:8419f6ebfb61eb2a6df42bad8b9cf93ffb3ce5515c6482158996914232a97024,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1720471192638245436,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-s6bz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0eb78c-aa32-4b4f-84db-fec283ac0e56,},Annotations:map[string]string{io.kubernetes.container.hash: 23c26aa7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdeece9ce92a6cb3b5657326e7d895cf041d30bf18e77b8906a93bcc5b40975,PodSandboxId:380440b35d6af2c2255c5a4576a3ba3e68a72fc5915690f779ceaa6469644540,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1720471189012206732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpjgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2fc18f83-0a80-4fa2-9ae9-473123cad4ed,},Annotations:map[string]string{io.kubernetes.container.hash: 36ce7e2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:283439be412bde819be74e1904648e8e1b771b28fd31bdaf5c794bda8b94ea87,PodSandboxId:97ccedf256fdde7a160589f62eb93f4547179fe91e81e7092186917785cf93eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720471188718698799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
d6991d-d160-4b3a-b907-71165a101ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d15e2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3592606c4ca78c42af507168bb6072549e637425e5c8aa11d0c0673430093f96,PodSandboxId:10ed71b3ea7e17e77c017cac0ed0de297c16971e6801839dadb3265b9cb2b459,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1720471182514064459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 12393bee4a13cf27442cabdab6ef95fe,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc674d61dc50aec3da94a80eea21e35b57fade3d25ce294b1774c79a86bf8e75,PodSandboxId:7e914f5aff952191f5f7daca8dc1a316a276caf9f6e1c0419beb55ec032abe63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1720471182464615142,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6074fe7cdac79b74d33ac3f
6360299d8,},Annotations:map[string]string{io.kubernetes.container.hash: 921d5aab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cb3455f2a519fd79d0a0f11bacc308efc28f06cc704406a8b8110440f55dda,PodSandboxId:ee0191fd9dd0c5219c529511d331ee1a5478b37506a4c293063f67fd981debb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1720471182429406925,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ecc25da3f01c329b0424d5d1db7b1c,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b368e879a01932732f4697d66847eeb51f552a2549405a34a3b123e473c00cef,PodSandboxId:5fd54ee1df3969daecb6a01857095d5e733234d7d08bdcd9350af4cc3432a839,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1720471182400023153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d688b9c86cdc3069d45bee60d6987b5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 4ad41263,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b51c813-1053-4e62-80c9-3de35a09ec02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.062076554Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44c44976-2564-475c-9ce3-b387cca1036f name=/runtime.v1.RuntimeService/Version
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.062205396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44c44976-2564-475c-9ce3-b387cca1036f name=/runtime.v1.RuntimeService/Version
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.063489423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edfa982b-d227-4dc3-a1ff-341a7db965b4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.063948773Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720471203063922855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edfa982b-d227-4dc3-a1ff-341a7db965b4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.064575369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06d51da4-e32f-413d-818f-0fe5be1773bc name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.064726503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06d51da4-e32f-413d-818f-0fe5be1773bc name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.064894154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d393b750358cc179b8a905f0e2dec1a499da97da269726bccb6d432bb659b9ff,PodSandboxId:8419f6ebfb61eb2a6df42bad8b9cf93ffb3ce5515c6482158996914232a97024,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1720471192638245436,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-s6bz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0eb78c-aa32-4b4f-84db-fec283ac0e56,},Annotations:map[string]string{io.kubernetes.container.hash: 23c26aa7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdeece9ce92a6cb3b5657326e7d895cf041d30bf18e77b8906a93bcc5b40975,PodSandboxId:380440b35d6af2c2255c5a4576a3ba3e68a72fc5915690f779ceaa6469644540,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1720471189012206732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpjgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2fc18f83-0a80-4fa2-9ae9-473123cad4ed,},Annotations:map[string]string{io.kubernetes.container.hash: 36ce7e2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:283439be412bde819be74e1904648e8e1b771b28fd31bdaf5c794bda8b94ea87,PodSandboxId:97ccedf256fdde7a160589f62eb93f4547179fe91e81e7092186917785cf93eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720471188718698799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
d6991d-d160-4b3a-b907-71165a101ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d15e2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3592606c4ca78c42af507168bb6072549e637425e5c8aa11d0c0673430093f96,PodSandboxId:10ed71b3ea7e17e77c017cac0ed0de297c16971e6801839dadb3265b9cb2b459,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1720471182514064459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 12393bee4a13cf27442cabdab6ef95fe,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc674d61dc50aec3da94a80eea21e35b57fade3d25ce294b1774c79a86bf8e75,PodSandboxId:7e914f5aff952191f5f7daca8dc1a316a276caf9f6e1c0419beb55ec032abe63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1720471182464615142,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6074fe7cdac79b74d33ac3f
6360299d8,},Annotations:map[string]string{io.kubernetes.container.hash: 921d5aab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cb3455f2a519fd79d0a0f11bacc308efc28f06cc704406a8b8110440f55dda,PodSandboxId:ee0191fd9dd0c5219c529511d331ee1a5478b37506a4c293063f67fd981debb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1720471182429406925,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ecc25da3f01c329b0424d5d1db7b1c,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b368e879a01932732f4697d66847eeb51f552a2549405a34a3b123e473c00cef,PodSandboxId:5fd54ee1df3969daecb6a01857095d5e733234d7d08bdcd9350af4cc3432a839,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1720471182400023153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d688b9c86cdc3069d45bee60d6987b5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 4ad41263,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06d51da4-e32f-413d-818f-0fe5be1773bc name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.101635783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=add3ff29-fc93-4318-a348-d8d001de46e9 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.101727574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=add3ff29-fc93-4318-a348-d8d001de46e9 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.103442578Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a5cbb6c-6e78-43b3-8065-dae34496e041 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.103919114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720471203103894549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a5cbb6c-6e78-43b3-8065-dae34496e041 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.104551677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce829198-4110-460a-8dbd-aee41e2c4e6c name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.104605892Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce829198-4110-460a-8dbd-aee41e2c4e6c name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:40:03 test-preload-309323 crio[704]: time="2024-07-08 20:40:03.104759724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d393b750358cc179b8a905f0e2dec1a499da97da269726bccb6d432bb659b9ff,PodSandboxId:8419f6ebfb61eb2a6df42bad8b9cf93ffb3ce5515c6482158996914232a97024,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1720471192638245436,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-s6bz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0eb78c-aa32-4b4f-84db-fec283ac0e56,},Annotations:map[string]string{io.kubernetes.container.hash: 23c26aa7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdeece9ce92a6cb3b5657326e7d895cf041d30bf18e77b8906a93bcc5b40975,PodSandboxId:380440b35d6af2c2255c5a4576a3ba3e68a72fc5915690f779ceaa6469644540,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1720471189012206732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpjgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2fc18f83-0a80-4fa2-9ae9-473123cad4ed,},Annotations:map[string]string{io.kubernetes.container.hash: 36ce7e2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:283439be412bde819be74e1904648e8e1b771b28fd31bdaf5c794bda8b94ea87,PodSandboxId:97ccedf256fdde7a160589f62eb93f4547179fe91e81e7092186917785cf93eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720471188718698799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
d6991d-d160-4b3a-b907-71165a101ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d15e2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3592606c4ca78c42af507168bb6072549e637425e5c8aa11d0c0673430093f96,PodSandboxId:10ed71b3ea7e17e77c017cac0ed0de297c16971e6801839dadb3265b9cb2b459,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1720471182514064459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 12393bee4a13cf27442cabdab6ef95fe,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc674d61dc50aec3da94a80eea21e35b57fade3d25ce294b1774c79a86bf8e75,PodSandboxId:7e914f5aff952191f5f7daca8dc1a316a276caf9f6e1c0419beb55ec032abe63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1720471182464615142,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6074fe7cdac79b74d33ac3f
6360299d8,},Annotations:map[string]string{io.kubernetes.container.hash: 921d5aab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cb3455f2a519fd79d0a0f11bacc308efc28f06cc704406a8b8110440f55dda,PodSandboxId:ee0191fd9dd0c5219c529511d331ee1a5478b37506a4c293063f67fd981debb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1720471182429406925,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ecc25da3f01c329b0424d5d1db7b1c,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b368e879a01932732f4697d66847eeb51f552a2549405a34a3b123e473c00cef,PodSandboxId:5fd54ee1df3969daecb6a01857095d5e733234d7d08bdcd9350af4cc3432a839,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1720471182400023153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-309323,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d688b9c86cdc3069d45bee60d6987b5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 4ad41263,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce829198-4110-460a-8dbd-aee41e2c4e6c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d393b750358cc       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   10 seconds ago      Running             coredns                   1                   8419f6ebfb61e       coredns-6d4b75cb6d-s6bz4
	fbdeece9ce92a       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   380440b35d6af       kube-proxy-qpjgv
	283439be412bd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   97ccedf256fdd       storage-provisioner
	3592606c4ca78       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   10ed71b3ea7e1       kube-controller-manager-test-preload-309323
	fc674d61dc50a       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   7e914f5aff952       etcd-test-preload-309323
	60cb3455f2a51       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   ee0191fd9dd0c       kube-scheduler-test-preload-309323
	b368e879a0193       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   5fd54ee1df396       kube-apiserver-test-preload-309323
	
	
	==> coredns [d393b750358cc179b8a905f0e2dec1a499da97da269726bccb6d432bb659b9ff] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:33551 - 38734 "HINFO IN 4241639646760939557.6587336476436551751. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015049676s
	
	
	==> describe nodes <==
	Name:               test-preload-309323
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-309323
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=test-preload-309323
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T20_38_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 20:38:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-309323
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:39:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 20:39:57 +0000   Mon, 08 Jul 2024 20:38:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 20:39:57 +0000   Mon, 08 Jul 2024 20:38:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 20:39:57 +0000   Mon, 08 Jul 2024 20:38:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 20:39:57 +0000   Mon, 08 Jul 2024 20:39:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    test-preload-309323
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f58bc5d04353439f941d528ec5edddef
	  System UUID:                f58bc5d0-4353-439f-941d-528ec5edddef
	  Boot ID:                    a2418f21-c92f-4fa5-89e3-d9ee27e5b3f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-s6bz4                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     74s
	  kube-system                 etcd-test-preload-309323                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         88s
	  kube-system                 kube-apiserver-test-preload-309323             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-test-preload-309323    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-qpjgv                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-test-preload-309323             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 72s                kube-proxy       
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  88s                kubelet          Node test-preload-309323 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s                kubelet          Node test-preload-309323 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s                kubelet          Node test-preload-309323 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                78s                kubelet          Node test-preload-309323 status is now: NodeReady
	  Normal  RegisteredNode           75s                node-controller  Node test-preload-309323 event: Registered Node test-preload-309323 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-309323 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-309323 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-309323 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                 node-controller  Node test-preload-309323 event: Registered Node test-preload-309323 in Controller
	
	
	==> dmesg <==
	[Jul 8 20:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050809] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040418] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.546304] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.332889] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.628786] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.812953] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.064305] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058437] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.169995] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.142651] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.271956] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[ +13.327885] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[  +0.058424] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.110418] systemd-fstab-generator[1091]: Ignoring "noauto" option for root device
	[  +4.335299] kauditd_printk_skb: 105 callbacks suppressed
	[  +4.199215] systemd-fstab-generator[1715]: Ignoring "noauto" option for root device
	[  +2.415683] kauditd_printk_skb: 53 callbacks suppressed
	[  +7.083871] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [fc674d61dc50aec3da94a80eea21e35b57fade3d25ce294b1774c79a86bf8e75] <==
	{"level":"info","ts":"2024-07-08T20:39:43.140Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"1d3fba3e6c6ecbcd","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-08T20:39:43.141Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-08T20:39:43.151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd switched to configuration voters=(2107607927902620621)"}
	{"level":"info","ts":"2024-07-08T20:39:43.151Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1e01947a35a5ac2c","local-member-id":"1d3fba3e6c6ecbcd","added-peer-id":"1d3fba3e6c6ecbcd","added-peer-peer-urls":["https://192.168.39.13:2380"]}
	{"level":"info","ts":"2024-07-08T20:39:43.158Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e01947a35a5ac2c","local-member-id":"1d3fba3e6c6ecbcd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T20:39:43.158Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T20:39:43.160Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T20:39:43.162Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2024-07-08T20:39:43.173Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2024-07-08T20:39:43.175Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1d3fba3e6c6ecbcd","initial-advertise-peer-urls":["https://192.168.39.13:2380"],"listen-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T20:39:43.175Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T20:39:44.448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-08T20:39:44.449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-08T20:39:44.449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd received MsgPreVoteResp from 1d3fba3e6c6ecbcd at term 2"}
	{"level":"info","ts":"2024-07-08T20:39:44.449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd became candidate at term 3"}
	{"level":"info","ts":"2024-07-08T20:39:44.449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd received MsgVoteResp from 1d3fba3e6c6ecbcd at term 3"}
	{"level":"info","ts":"2024-07-08T20:39:44.449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd became leader at term 3"}
	{"level":"info","ts":"2024-07-08T20:39:44.449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1d3fba3e6c6ecbcd elected leader 1d3fba3e6c6ecbcd at term 3"}
	{"level":"info","ts":"2024-07-08T20:39:44.451Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"1d3fba3e6c6ecbcd","local-member-attributes":"{Name:test-preload-309323 ClientURLs:[https://192.168.39.13:2379]}","request-path":"/0/members/1d3fba3e6c6ecbcd/attributes","cluster-id":"1e01947a35a5ac2c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T20:39:44.451Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T20:39:44.452Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T20:39:44.453Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T20:39:44.454Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.13:2379"}
	{"level":"info","ts":"2024-07-08T20:39:44.460Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T20:39:44.460Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:40:03 up 0 min,  0 users,  load average: 0.75, 0.21, 0.07
	Linux test-preload-309323 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b368e879a01932732f4697d66847eeb51f552a2549405a34a3b123e473c00cef] <==
	I0708 20:39:46.929506       1 establishing_controller.go:76] Starting EstablishingController
	I0708 20:39:46.929545       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0708 20:39:46.929559       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0708 20:39:46.929583       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0708 20:39:46.939564       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0708 20:39:46.953362       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0708 20:39:47.024320       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0708 20:39:47.041379       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0708 20:39:47.051630       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0708 20:39:47.090350       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 20:39:47.097027       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0708 20:39:47.097117       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0708 20:39:47.098845       1 cache.go:39] Caches are synced for autoregister controller
	I0708 20:39:47.098981       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0708 20:39:47.127881       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0708 20:39:47.562282       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0708 20:39:47.931379       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0708 20:39:48.417930       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0708 20:39:48.431304       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0708 20:39:48.465855       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0708 20:39:48.485499       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 20:39:48.493027       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0708 20:39:49.351337       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0708 20:39:59.417963       1 controller.go:611] quota admission added evaluator for: endpoints
	I0708 20:39:59.449364       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3592606c4ca78c42af507168bb6072549e637425e5c8aa11d0c0673430093f96] <==
	I0708 20:39:59.397415       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0708 20:39:59.400626       1 shared_informer.go:262] Caches are synced for stateful set
	I0708 20:39:59.401835       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0708 20:39:59.401876       1 shared_informer.go:262] Caches are synced for PV protection
	I0708 20:39:59.404219       1 shared_informer.go:262] Caches are synced for persistent volume
	I0708 20:39:59.406545       1 shared_informer.go:262] Caches are synced for attach detach
	I0708 20:39:59.408939       1 shared_informer.go:262] Caches are synced for GC
	I0708 20:39:59.408983       1 shared_informer.go:262] Caches are synced for endpoint
	I0708 20:39:59.411387       1 shared_informer.go:262] Caches are synced for cronjob
	I0708 20:39:59.415593       1 shared_informer.go:262] Caches are synced for TTL
	I0708 20:39:59.418694       1 shared_informer.go:262] Caches are synced for PVC protection
	I0708 20:39:59.424425       1 shared_informer.go:262] Caches are synced for job
	I0708 20:39:59.433052       1 shared_informer.go:262] Caches are synced for ephemeral
	I0708 20:39:59.435276       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0708 20:39:59.440058       1 shared_informer.go:262] Caches are synced for namespace
	I0708 20:39:59.468717       1 shared_informer.go:262] Caches are synced for deployment
	I0708 20:39:59.507401       1 shared_informer.go:262] Caches are synced for crt configmap
	I0708 20:39:59.513458       1 shared_informer.go:262] Caches are synced for disruption
	I0708 20:39:59.513651       1 disruption.go:371] Sending events to api server.
	I0708 20:39:59.526388       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0708 20:39:59.570303       1 shared_informer.go:262] Caches are synced for resource quota
	I0708 20:39:59.615381       1 shared_informer.go:262] Caches are synced for resource quota
	I0708 20:40:00.057811       1 shared_informer.go:262] Caches are synced for garbage collector
	I0708 20:40:00.081572       1 shared_informer.go:262] Caches are synced for garbage collector
	I0708 20:40:00.081605       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [fbdeece9ce92a6cb3b5657326e7d895cf041d30bf18e77b8906a93bcc5b40975] <==
	I0708 20:39:49.302518       1 node.go:163] Successfully retrieved node IP: 192.168.39.13
	I0708 20:39:49.302672       1 server_others.go:138] "Detected node IP" address="192.168.39.13"
	I0708 20:39:49.302741       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0708 20:39:49.336846       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0708 20:39:49.336864       1 server_others.go:206] "Using iptables Proxier"
	I0708 20:39:49.336892       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0708 20:39:49.337500       1 server.go:661] "Version info" version="v1.24.4"
	I0708 20:39:49.337514       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:39:49.339117       1 config.go:317] "Starting service config controller"
	I0708 20:39:49.339367       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0708 20:39:49.339553       1 config.go:444] "Starting node config controller"
	I0708 20:39:49.339614       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0708 20:39:49.339655       1 config.go:226] "Starting endpoint slice config controller"
	I0708 20:39:49.339677       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0708 20:39:49.440609       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0708 20:39:49.440699       1 shared_informer.go:262] Caches are synced for service config
	I0708 20:39:49.440712       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [60cb3455f2a519fd79d0a0f11bacc308efc28f06cc704406a8b8110440f55dda] <==
	I0708 20:39:43.877822       1 serving.go:348] Generated self-signed cert in-memory
	W0708 20:39:46.958221       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 20:39:46.958268       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 20:39:46.958281       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 20:39:46.958289       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 20:39:47.007437       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0708 20:39:47.008430       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:39:47.017664       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0708 20:39:47.017839       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0708 20:39:47.017874       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 20:39:47.017895       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0708 20:39:47.038718       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 20:39:47.038762       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0708 20:39:47.118867       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 08 20:39:47 test-preload-309323 kubelet[1098]: I0708 20:39:47.871122    1098 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2fc18f83-0a80-4fa2-9ae9-473123cad4ed-kube-proxy\") pod \"kube-proxy-qpjgv\" (UID: \"2fc18f83-0a80-4fa2-9ae9-473123cad4ed\") " pod="kube-system/kube-proxy-qpjgv"
	Jul 08 20:39:47 test-preload-309323 kubelet[1098]: I0708 20:39:47.871217    1098 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fc18f83-0a80-4fa2-9ae9-473123cad4ed-lib-modules\") pod \"kube-proxy-qpjgv\" (UID: \"2fc18f83-0a80-4fa2-9ae9-473123cad4ed\") " pod="kube-system/kube-proxy-qpjgv"
	Jul 08 20:39:47 test-preload-309323 kubelet[1098]: I0708 20:39:47.871237    1098 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fc18f83-0a80-4fa2-9ae9-473123cad4ed-xtables-lock\") pod \"kube-proxy-qpjgv\" (UID: \"2fc18f83-0a80-4fa2-9ae9-473123cad4ed\") " pod="kube-system/kube-proxy-qpjgv"
	Jul 08 20:39:47 test-preload-309323 kubelet[1098]: I0708 20:39:47.871259    1098 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5vjw\" (UniqueName: \"kubernetes.io/projected/2fc18f83-0a80-4fa2-9ae9-473123cad4ed-kube-api-access-f5vjw\") pod \"kube-proxy-qpjgv\" (UID: \"2fc18f83-0a80-4fa2-9ae9-473123cad4ed\") " pod="kube-system/kube-proxy-qpjgv"
	Jul 08 20:39:47 test-preload-309323 kubelet[1098]: I0708 20:39:47.871285    1098 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btw7g\" (UniqueName: \"kubernetes.io/projected/9d0eb78c-aa32-4b4f-84db-fec283ac0e56-kube-api-access-btw7g\") pod \"coredns-6d4b75cb6d-s6bz4\" (UID: \"9d0eb78c-aa32-4b4f-84db-fec283ac0e56\") " pod="kube-system/coredns-6d4b75cb6d-s6bz4"
	Jul 08 20:39:47 test-preload-309323 kubelet[1098]: I0708 20:39:47.871315    1098 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d0eb78c-aa32-4b4f-84db-fec283ac0e56-config-volume\") pod \"coredns-6d4b75cb6d-s6bz4\" (UID: \"9d0eb78c-aa32-4b4f-84db-fec283ac0e56\") " pod="kube-system/coredns-6d4b75cb6d-s6bz4"
	Jul 08 20:39:47 test-preload-309323 kubelet[1098]: I0708 20:39:47.871335    1098 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/20d6991d-d160-4b3a-b907-71165a101ce9-tmp\") pod \"storage-provisioner\" (UID: \"20d6991d-d160-4b3a-b907-71165a101ce9\") " pod="kube-system/storage-provisioner"
	Jul 08 20:39:47 test-preload-309323 kubelet[1098]: I0708 20:39:47.871352    1098 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjndd\" (UniqueName: \"kubernetes.io/projected/20d6991d-d160-4b3a-b907-71165a101ce9-kube-api-access-zjndd\") pod \"storage-provisioner\" (UID: \"20d6991d-d160-4b3a-b907-71165a101ce9\") " pod="kube-system/storage-provisioner"
	Jul 08 20:39:47 test-preload-309323 kubelet[1098]: I0708 20:39:47.871368    1098 reconciler.go:159] "Reconciler: start to sync state"
	Jul 08 20:39:48 test-preload-309323 kubelet[1098]: I0708 20:39:48.289224    1098 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7xcl\" (UniqueName: \"kubernetes.io/projected/201511e1-16ae-4d46-8617-e169c95cdbc0-kube-api-access-g7xcl\") pod \"201511e1-16ae-4d46-8617-e169c95cdbc0\" (UID: \"201511e1-16ae-4d46-8617-e169c95cdbc0\") "
	Jul 08 20:39:48 test-preload-309323 kubelet[1098]: I0708 20:39:48.289270    1098 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/201511e1-16ae-4d46-8617-e169c95cdbc0-config-volume\") pod \"201511e1-16ae-4d46-8617-e169c95cdbc0\" (UID: \"201511e1-16ae-4d46-8617-e169c95cdbc0\") "
	Jul 08 20:39:48 test-preload-309323 kubelet[1098]: E0708 20:39:48.289853    1098 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 08 20:39:48 test-preload-309323 kubelet[1098]: E0708 20:39:48.289922    1098 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9d0eb78c-aa32-4b4f-84db-fec283ac0e56-config-volume podName:9d0eb78c-aa32-4b4f-84db-fec283ac0e56 nodeName:}" failed. No retries permitted until 2024-07-08 20:39:48.789891298 +0000 UTC m=+7.225069437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9d0eb78c-aa32-4b4f-84db-fec283ac0e56-config-volume") pod "coredns-6d4b75cb6d-s6bz4" (UID: "9d0eb78c-aa32-4b4f-84db-fec283ac0e56") : object "kube-system"/"coredns" not registered
	Jul 08 20:39:48 test-preload-309323 kubelet[1098]: W0708 20:39:48.290973    1098 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/201511e1-16ae-4d46-8617-e169c95cdbc0/volumes/kubernetes.io~projected/kube-api-access-g7xcl: clearQuota called, but quotas disabled
	Jul 08 20:39:48 test-preload-309323 kubelet[1098]: W0708 20:39:48.291745    1098 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/201511e1-16ae-4d46-8617-e169c95cdbc0/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jul 08 20:39:48 test-preload-309323 kubelet[1098]: I0708 20:39:48.291906    1098 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201511e1-16ae-4d46-8617-e169c95cdbc0-kube-api-access-g7xcl" (OuterVolumeSpecName: "kube-api-access-g7xcl") pod "201511e1-16ae-4d46-8617-e169c95cdbc0" (UID: "201511e1-16ae-4d46-8617-e169c95cdbc0"). InnerVolumeSpecName "kube-api-access-g7xcl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 08 20:39:48 test-preload-309323 kubelet[1098]: I0708 20:39:48.292282    1098 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/201511e1-16ae-4d46-8617-e169c95cdbc0-config-volume" (OuterVolumeSpecName: "config-volume") pod "201511e1-16ae-4d46-8617-e169c95cdbc0" (UID: "201511e1-16ae-4d46-8617-e169c95cdbc0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jul 08 20:39:48 test-preload-309323 kubelet[1098]: I0708 20:39:48.390625    1098 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/201511e1-16ae-4d46-8617-e169c95cdbc0-config-volume\") on node \"test-preload-309323\" DevicePath \"\""
	Jul 08 20:39:48 test-preload-309323 kubelet[1098]: I0708 20:39:48.390666    1098 reconciler.go:384] "Volume detached for volume \"kube-api-access-g7xcl\" (UniqueName: \"kubernetes.io/projected/201511e1-16ae-4d46-8617-e169c95cdbc0-kube-api-access-g7xcl\") on node \"test-preload-309323\" DevicePath \"\""
	Jul 08 20:39:48 test-preload-309323 kubelet[1098]: E0708 20:39:48.794264    1098 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 08 20:39:48 test-preload-309323 kubelet[1098]: E0708 20:39:48.794330    1098 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9d0eb78c-aa32-4b4f-84db-fec283ac0e56-config-volume podName:9d0eb78c-aa32-4b4f-84db-fec283ac0e56 nodeName:}" failed. No retries permitted until 2024-07-08 20:39:49.794316292 +0000 UTC m=+8.229494432 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9d0eb78c-aa32-4b4f-84db-fec283ac0e56-config-volume") pod "coredns-6d4b75cb6d-s6bz4" (UID: "9d0eb78c-aa32-4b4f-84db-fec283ac0e56") : object "kube-system"/"coredns" not registered
	Jul 08 20:39:49 test-preload-309323 kubelet[1098]: E0708 20:39:49.803815    1098 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 08 20:39:49 test-preload-309323 kubelet[1098]: E0708 20:39:49.803897    1098 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9d0eb78c-aa32-4b4f-84db-fec283ac0e56-config-volume podName:9d0eb78c-aa32-4b4f-84db-fec283ac0e56 nodeName:}" failed. No retries permitted until 2024-07-08 20:39:51.803883542 +0000 UTC m=+10.239061669 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9d0eb78c-aa32-4b4f-84db-fec283ac0e56-config-volume") pod "coredns-6d4b75cb6d-s6bz4" (UID: "9d0eb78c-aa32-4b4f-84db-fec283ac0e56") : object "kube-system"/"coredns" not registered
	Jul 08 20:39:49 test-preload-309323 kubelet[1098]: E0708 20:39:49.808550    1098 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-s6bz4" podUID=9d0eb78c-aa32-4b4f-84db-fec283ac0e56
	Jul 08 20:39:49 test-preload-309323 kubelet[1098]: I0708 20:39:49.813574    1098 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=201511e1-16ae-4d46-8617-e169c95cdbc0 path="/var/lib/kubelet/pods/201511e1-16ae-4d46-8617-e169c95cdbc0/volumes"
	
	
	==> storage-provisioner [283439be412bde819be74e1904648e8e1b771b28fd31bdaf5c794bda8b94ea87] <==
	I0708 20:39:48.796655       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-309323 -n test-preload-309323
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-309323 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-309323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-309323
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-309323: (1.143324761s)
--- FAIL: TestPreload (168.86s)

                                                
                                    
x
+
TestKubernetesUpgrade (375.92s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-467273 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0708 21:12:19.105863   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:12:19.111161   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:12:19.121480   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:12:19.141834   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:12:19.182147   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:12:19.262529   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:12:19.422970   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:12:19.743895   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:12:20.385041   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:12:21.665284   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:12:24.226317   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:12:29.347094   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:12:39.587615   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:13:00.067902   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:13:41.029079   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:14:23.844308   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 21:15:02.950076   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:16:29.733030   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-467273 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m30.863731787s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-467273] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19195
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-467273" primary control-plane node in "kubernetes-upgrade-467273" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 21:12:03.051179   64608 out.go:291] Setting OutFile to fd 1 ...
	I0708 21:12:03.051319   64608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 21:12:03.051330   64608 out.go:304] Setting ErrFile to fd 2...
	I0708 21:12:03.051337   64608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 21:12:03.051660   64608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 21:12:03.052416   64608 out.go:298] Setting JSON to false
	I0708 21:12:03.053629   64608 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6872,"bootTime":1720466251,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 21:12:03.053705   64608 start.go:139] virtualization: kvm guest
	I0708 21:12:03.056171   64608 out.go:177] * [kubernetes-upgrade-467273] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 21:12:03.057769   64608 notify.go:220] Checking for updates...
	I0708 21:12:03.057814   64608 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 21:12:03.059247   64608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 21:12:03.060660   64608 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:12:03.061978   64608 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 21:12:03.063112   64608 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 21:12:03.064243   64608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 21:12:03.065907   64608 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:12:03.066021   64608 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:12:03.066124   64608 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:12:03.066219   64608 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 21:12:03.105168   64608 out.go:177] * Using the kvm2 driver based on user configuration
	I0708 21:12:03.106506   64608 start.go:297] selected driver: kvm2
	I0708 21:12:03.106526   64608 start.go:901] validating driver "kvm2" against <nil>
	I0708 21:12:03.106542   64608 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 21:12:03.107564   64608 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 21:12:03.107655   64608 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 21:12:03.124567   64608 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 21:12:03.124616   64608 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 21:12:03.124829   64608 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0708 21:12:03.124909   64608 cni.go:84] Creating CNI manager for ""
	I0708 21:12:03.124937   64608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:12:03.124958   64608 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 21:12:03.125022   64608 start.go:340] cluster config:
	{Name:kubernetes-upgrade-467273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-467273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 21:12:03.125115   64608 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 21:12:03.127140   64608 out.go:177] * Starting "kubernetes-upgrade-467273" primary control-plane node in "kubernetes-upgrade-467273" cluster
	I0708 21:12:03.128458   64608 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0708 21:12:03.128503   64608 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0708 21:12:03.128513   64608 cache.go:56] Caching tarball of preloaded images
	I0708 21:12:03.128600   64608 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 21:12:03.128613   64608 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0708 21:12:03.128736   64608 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/config.json ...
	I0708 21:12:03.128758   64608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/config.json: {Name:mk857e887319f0318ed6d9d6984f55759745dc72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:12:03.128949   64608 start.go:360] acquireMachinesLock for kubernetes-upgrade-467273: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 21:12:03.128999   64608 start.go:364] duration metric: took 22.196µs to acquireMachinesLock for "kubernetes-upgrade-467273"
	I0708 21:12:03.129025   64608 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-467273 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-467273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:12:03.129106   64608 start.go:125] createHost starting for "" (driver="kvm2")
	I0708 21:12:03.130851   64608 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 21:12:03.131043   64608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:12:03.131094   64608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:12:03.146299   64608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42053
	I0708 21:12:03.146775   64608 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:12:03.147359   64608 main.go:141] libmachine: Using API Version  1
	I0708 21:12:03.147379   64608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:12:03.147722   64608 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:12:03.147905   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetMachineName
	I0708 21:12:03.148112   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:12:03.148261   64608 start.go:159] libmachine.API.Create for "kubernetes-upgrade-467273" (driver="kvm2")
	I0708 21:12:03.148287   64608 client.go:168] LocalClient.Create starting
	I0708 21:12:03.148323   64608 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem
	I0708 21:12:03.148355   64608 main.go:141] libmachine: Decoding PEM data...
	I0708 21:12:03.148368   64608 main.go:141] libmachine: Parsing certificate...
	I0708 21:12:03.148419   64608 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem
	I0708 21:12:03.148436   64608 main.go:141] libmachine: Decoding PEM data...
	I0708 21:12:03.148452   64608 main.go:141] libmachine: Parsing certificate...
	I0708 21:12:03.148469   64608 main.go:141] libmachine: Running pre-create checks...
	I0708 21:12:03.148478   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .PreCreateCheck
	I0708 21:12:03.148820   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetConfigRaw
	I0708 21:12:03.149243   64608 main.go:141] libmachine: Creating machine...
	I0708 21:12:03.149257   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .Create
	I0708 21:12:03.149381   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Creating KVM machine...
	I0708 21:12:03.150607   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found existing default KVM network
	I0708 21:12:03.151859   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:03.151699   64647 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:04:00:c2} reservation:<nil>}
	I0708 21:12:03.153330   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:03.153207   64647 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002b40a0}
	I0708 21:12:03.153358   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | created network xml: 
	I0708 21:12:03.153371   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | <network>
	I0708 21:12:03.153386   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG |   <name>mk-kubernetes-upgrade-467273</name>
	I0708 21:12:03.153401   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG |   <dns enable='no'/>
	I0708 21:12:03.153409   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG |   
	I0708 21:12:03.153420   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0708 21:12:03.153429   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG |     <dhcp>
	I0708 21:12:03.153444   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0708 21:12:03.153455   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG |     </dhcp>
	I0708 21:12:03.153467   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG |   </ip>
	I0708 21:12:03.153477   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG |   
	I0708 21:12:03.153489   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | </network>
	I0708 21:12:03.153499   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | 
	I0708 21:12:03.159126   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | trying to create private KVM network mk-kubernetes-upgrade-467273 192.168.50.0/24...
	I0708 21:12:03.238147   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | private KVM network mk-kubernetes-upgrade-467273 192.168.50.0/24 created
	I0708 21:12:03.238188   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Setting up store path in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273 ...
	I0708 21:12:03.238214   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:03.238091   64647 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 21:12:03.238236   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Building disk image from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso
	I0708 21:12:03.238260   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Downloading /home/jenkins/minikube-integration/19195-5988/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso...
	I0708 21:12:03.473792   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:03.473666   64647 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/id_rsa...
	I0708 21:12:04.078256   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:04.078111   64647 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/kubernetes-upgrade-467273.rawdisk...
	I0708 21:12:04.078328   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Writing magic tar header
	I0708 21:12:04.078348   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Writing SSH key tar header
	I0708 21:12:04.078362   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:04.078244   64647 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273 ...
	I0708 21:12:04.078375   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273 (perms=drwx------)
	I0708 21:12:04.078406   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273
	I0708 21:12:04.078431   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines (perms=drwxr-xr-x)
	I0708 21:12:04.078442   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines
	I0708 21:12:04.078455   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 21:12:04.078468   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988
	I0708 21:12:04.078488   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0708 21:12:04.078500   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Checking permissions on dir: /home/jenkins
	I0708 21:12:04.078509   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Checking permissions on dir: /home
	I0708 21:12:04.078521   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Skipping /home - not owner
	I0708 21:12:04.078532   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube (perms=drwxr-xr-x)
	I0708 21:12:04.078543   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988 (perms=drwxrwxr-x)
	I0708 21:12:04.078556   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0708 21:12:04.078638   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0708 21:12:04.078666   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Creating domain...
	I0708 21:12:04.079834   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) define libvirt domain using xml: 
	I0708 21:12:04.079869   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) <domain type='kvm'>
	I0708 21:12:04.079882   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)   <name>kubernetes-upgrade-467273</name>
	I0708 21:12:04.079894   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)   <memory unit='MiB'>2200</memory>
	I0708 21:12:04.079903   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)   <vcpu>2</vcpu>
	I0708 21:12:04.079909   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)   <features>
	I0708 21:12:04.079920   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <acpi/>
	I0708 21:12:04.079927   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <apic/>
	I0708 21:12:04.079944   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <pae/>
	I0708 21:12:04.079954   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     
	I0708 21:12:04.079978   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)   </features>
	I0708 21:12:04.079996   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)   <cpu mode='host-passthrough'>
	I0708 21:12:04.080002   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)   
	I0708 21:12:04.080008   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)   </cpu>
	I0708 21:12:04.080015   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)   <os>
	I0708 21:12:04.080021   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <type>hvm</type>
	I0708 21:12:04.080029   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <boot dev='cdrom'/>
	I0708 21:12:04.080034   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <boot dev='hd'/>
	I0708 21:12:04.080041   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <bootmenu enable='no'/>
	I0708 21:12:04.080045   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)   </os>
	I0708 21:12:04.080053   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)   <devices>
	I0708 21:12:04.080059   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <disk type='file' device='cdrom'>
	I0708 21:12:04.080086   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/boot2docker.iso'/>
	I0708 21:12:04.080117   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <target dev='hdc' bus='scsi'/>
	I0708 21:12:04.080130   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <readonly/>
	I0708 21:12:04.080140   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     </disk>
	I0708 21:12:04.080153   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <disk type='file' device='disk'>
	I0708 21:12:04.080162   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0708 21:12:04.080179   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/kubernetes-upgrade-467273.rawdisk'/>
	I0708 21:12:04.080191   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <target dev='hda' bus='virtio'/>
	I0708 21:12:04.080204   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     </disk>
	I0708 21:12:04.080211   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <interface type='network'>
	I0708 21:12:04.080235   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <source network='mk-kubernetes-upgrade-467273'/>
	I0708 21:12:04.080254   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <model type='virtio'/>
	I0708 21:12:04.080267   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     </interface>
	I0708 21:12:04.080279   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <interface type='network'>
	I0708 21:12:04.080292   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <source network='default'/>
	I0708 21:12:04.080302   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <model type='virtio'/>
	I0708 21:12:04.080312   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     </interface>
	I0708 21:12:04.080327   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <serial type='pty'>
	I0708 21:12:04.080340   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <target port='0'/>
	I0708 21:12:04.080351   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     </serial>
	I0708 21:12:04.080363   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <console type='pty'>
	I0708 21:12:04.080374   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <target type='serial' port='0'/>
	I0708 21:12:04.080385   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     </console>
	I0708 21:12:04.080400   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     <rng model='virtio'>
	I0708 21:12:04.080411   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)       <backend model='random'>/dev/random</backend>
	I0708 21:12:04.080426   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     </rng>
	I0708 21:12:04.080437   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     
	I0708 21:12:04.080447   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)     
	I0708 21:12:04.080457   64608 main.go:141] libmachine: (kubernetes-upgrade-467273)   </devices>
	I0708 21:12:04.080470   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) </domain>
	I0708 21:12:04.080484   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) 
	I0708 21:12:04.085130   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:31:35:95 in network default
	I0708 21:12:04.085684   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Ensuring networks are active...
	I0708 21:12:04.085735   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:04.086506   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Ensuring network default is active
	I0708 21:12:04.086826   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Ensuring network mk-kubernetes-upgrade-467273 is active
	I0708 21:12:04.087418   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Getting domain xml...
	I0708 21:12:04.088227   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Creating domain...
	I0708 21:12:05.396328   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Waiting to get IP...
	I0708 21:12:05.397479   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:05.397984   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:05.398011   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:05.397960   64647 retry.go:31] will retry after 256.959553ms: waiting for machine to come up
	I0708 21:12:05.656609   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:05.657199   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:05.657224   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:05.657118   64647 retry.go:31] will retry after 382.330316ms: waiting for machine to come up
	I0708 21:12:06.040680   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:06.041249   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:06.041274   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:06.041227   64647 retry.go:31] will retry after 442.093889ms: waiting for machine to come up
	I0708 21:12:06.484509   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:06.484956   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:06.484986   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:06.484923   64647 retry.go:31] will retry after 423.485781ms: waiting for machine to come up
	I0708 21:12:06.909418   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:06.909910   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:06.909942   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:06.909863   64647 retry.go:31] will retry after 646.732553ms: waiting for machine to come up
	I0708 21:12:07.558949   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:07.559549   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:07.559572   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:07.559523   64647 retry.go:31] will retry after 817.837182ms: waiting for machine to come up
	I0708 21:12:08.378900   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:08.379441   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:08.379481   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:08.379359   64647 retry.go:31] will retry after 1.053998235s: waiting for machine to come up
	I0708 21:12:09.434498   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:09.434923   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:09.434978   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:09.434895   64647 retry.go:31] will retry after 1.03619925s: waiting for machine to come up
	I0708 21:12:10.472860   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:10.473344   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:10.473378   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:10.473305   64647 retry.go:31] will retry after 1.373530207s: waiting for machine to come up
	I0708 21:12:11.848526   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:11.849003   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:11.849031   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:11.848961   64647 retry.go:31] will retry after 1.597421188s: waiting for machine to come up
	I0708 21:12:13.448109   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:13.448581   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:13.448611   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:13.448556   64647 retry.go:31] will retry after 2.161450196s: waiting for machine to come up
	I0708 21:12:15.612488   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:15.613154   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:15.613194   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:15.613076   64647 retry.go:31] will retry after 2.80307317s: waiting for machine to come up
	I0708 21:12:18.418316   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:18.418781   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:18.418810   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:18.418733   64647 retry.go:31] will retry after 3.858894817s: waiting for machine to come up
	I0708 21:12:22.281557   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:22.281982   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:12:22.282008   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:12:22.281928   64647 retry.go:31] will retry after 5.259482687s: waiting for machine to come up
	I0708 21:12:27.543265   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:27.543911   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Found IP for machine: 192.168.50.94
	I0708 21:12:27.543956   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Reserving static IP address...
	I0708 21:12:27.543978   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has current primary IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:27.544419   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-467273", mac: "52:54:00:16:6e:d6", ip: "192.168.50.94"} in network mk-kubernetes-upgrade-467273
	I0708 21:12:27.626666   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Getting to WaitForSSH function...
	I0708 21:12:27.626702   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Reserved static IP address: 192.168.50.94
	I0708 21:12:27.626750   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Waiting for SSH to be available...
	I0708 21:12:27.629727   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:27.630209   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:minikube Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:27.630239   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:27.630381   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Using SSH client type: external
	I0708 21:12:27.630404   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/id_rsa (-rw-------)
	I0708 21:12:27.630458   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 21:12:27.630477   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | About to run SSH command:
	I0708 21:12:27.630505   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | exit 0
	I0708 21:12:27.761388   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | SSH cmd err, output: <nil>: 
	I0708 21:12:27.761702   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) KVM machine creation complete!
	I0708 21:12:27.761988   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetConfigRaw
	I0708 21:12:27.762544   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:12:27.762755   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:12:27.762923   64608 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0708 21:12:27.762946   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetState
	I0708 21:12:27.764422   64608 main.go:141] libmachine: Detecting operating system of created instance...
	I0708 21:12:27.764440   64608 main.go:141] libmachine: Waiting for SSH to be available...
	I0708 21:12:27.764448   64608 main.go:141] libmachine: Getting to WaitForSSH function...
	I0708 21:12:27.764457   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:12:27.767334   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:27.767747   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:27.767788   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:27.767967   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:12:27.768146   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:27.768328   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:27.768470   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:12:27.768666   64608 main.go:141] libmachine: Using SSH client type: native
	I0708 21:12:27.768866   64608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0708 21:12:27.768881   64608 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0708 21:12:27.875757   64608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 21:12:27.875777   64608 main.go:141] libmachine: Detecting the provisioner...
	I0708 21:12:27.875784   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:12:27.878697   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:27.879063   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:27.879093   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:27.879285   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:12:27.879524   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:27.879714   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:27.879818   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:12:27.880012   64608 main.go:141] libmachine: Using SSH client type: native
	I0708 21:12:27.880209   64608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0708 21:12:27.880223   64608 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0708 21:12:27.989596   64608 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0708 21:12:27.989670   64608 main.go:141] libmachine: found compatible host: buildroot
	I0708 21:12:27.989678   64608 main.go:141] libmachine: Provisioning with buildroot...
	I0708 21:12:27.989690   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetMachineName
	I0708 21:12:27.989954   64608 buildroot.go:166] provisioning hostname "kubernetes-upgrade-467273"
	I0708 21:12:27.989982   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetMachineName
	I0708 21:12:27.990126   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:12:27.993024   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:27.993457   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:27.993491   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:27.993712   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:12:27.993965   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:27.994131   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:27.994333   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:12:27.994534   64608 main.go:141] libmachine: Using SSH client type: native
	I0708 21:12:27.994747   64608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0708 21:12:27.994763   64608 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-467273 && echo "kubernetes-upgrade-467273" | sudo tee /etc/hostname
	I0708 21:12:28.126713   64608 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-467273
	
	I0708 21:12:28.126741   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:12:28.130177   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.130613   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:28.130642   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.130777   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:12:28.131000   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:28.131190   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:28.131360   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:12:28.131584   64608 main.go:141] libmachine: Using SSH client type: native
	I0708 21:12:28.131789   64608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0708 21:12:28.131826   64608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-467273' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-467273/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-467273' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 21:12:28.248976   64608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 21:12:28.249010   64608 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 21:12:28.249063   64608 buildroot.go:174] setting up certificates
	I0708 21:12:28.249081   64608 provision.go:84] configureAuth start
	I0708 21:12:28.249102   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetMachineName
	I0708 21:12:28.249439   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetIP
	I0708 21:12:28.252103   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.252500   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:28.252551   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.252715   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:12:28.255079   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.255534   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:28.255565   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.255697   64608 provision.go:143] copyHostCerts
	I0708 21:12:28.255757   64608 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 21:12:28.255770   64608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 21:12:28.255856   64608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 21:12:28.255968   64608 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 21:12:28.255979   64608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 21:12:28.256016   64608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 21:12:28.256086   64608 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 21:12:28.256097   64608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 21:12:28.256128   64608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 21:12:28.256203   64608 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-467273 san=[127.0.0.1 192.168.50.94 kubernetes-upgrade-467273 localhost minikube]
	I0708 21:12:28.370184   64608 provision.go:177] copyRemoteCerts
	I0708 21:12:28.370236   64608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 21:12:28.370256   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:12:28.373171   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.373591   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:28.373632   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.373851   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:12:28.374051   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:28.374223   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:12:28.374409   64608 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/id_rsa Username:docker}
	I0708 21:12:28.460168   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 21:12:28.488316   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0708 21:12:28.515509   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 21:12:28.543467   64608 provision.go:87] duration metric: took 294.352879ms to configureAuth
	I0708 21:12:28.543520   64608 buildroot.go:189] setting minikube options for container-runtime
	I0708 21:12:28.543723   64608 config.go:182] Loaded profile config "kubernetes-upgrade-467273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0708 21:12:28.543799   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:12:28.547055   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.547572   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:28.547602   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.547880   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:12:28.548097   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:28.548274   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:28.548441   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:12:28.548630   64608 main.go:141] libmachine: Using SSH client type: native
	I0708 21:12:28.548795   64608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0708 21:12:28.548808   64608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 21:12:28.849502   64608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 21:12:28.849533   64608 main.go:141] libmachine: Checking connection to Docker...
	I0708 21:12:28.849543   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetURL
	I0708 21:12:28.850872   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | Using libvirt version 6000000
	I0708 21:12:28.853374   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.853744   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:28.853775   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.854014   64608 main.go:141] libmachine: Docker is up and running!
	I0708 21:12:28.854028   64608 main.go:141] libmachine: Reticulating splines...
	I0708 21:12:28.854035   64608 client.go:171] duration metric: took 25.705738907s to LocalClient.Create
	I0708 21:12:28.854057   64608 start.go:167] duration metric: took 25.705797499s to libmachine.API.Create "kubernetes-upgrade-467273"
	I0708 21:12:28.854081   64608 start.go:293] postStartSetup for "kubernetes-upgrade-467273" (driver="kvm2")
	I0708 21:12:28.854097   64608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 21:12:28.854119   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:12:28.854372   64608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 21:12:28.854393   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:12:28.856657   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.856970   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:28.856999   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.857155   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:12:28.857332   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:28.857486   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:12:28.857656   64608 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/id_rsa Username:docker}
	I0708 21:12:28.947324   64608 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 21:12:28.952294   64608 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 21:12:28.952326   64608 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 21:12:28.952389   64608 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 21:12:28.952462   64608 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 21:12:28.952597   64608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 21:12:28.963767   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 21:12:28.990004   64608 start.go:296] duration metric: took 135.905665ms for postStartSetup
	I0708 21:12:28.990062   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetConfigRaw
	I0708 21:12:28.990720   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetIP
	I0708 21:12:28.993798   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.994172   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:28.994205   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.994437   64608 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/config.json ...
	I0708 21:12:28.994670   64608 start.go:128] duration metric: took 25.865548144s to createHost
	I0708 21:12:28.994700   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:12:28.996829   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.997232   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:28.997253   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:28.997401   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:12:28.997589   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:28.997773   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:28.997912   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:12:28.998058   64608 main.go:141] libmachine: Using SSH client type: native
	I0708 21:12:28.998231   64608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0708 21:12:28.998248   64608 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 21:12:29.113168   64608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720473149.069154183
	
	I0708 21:12:29.113197   64608 fix.go:216] guest clock: 1720473149.069154183
	I0708 21:12:29.113208   64608 fix.go:229] Guest: 2024-07-08 21:12:29.069154183 +0000 UTC Remote: 2024-07-08 21:12:28.994685956 +0000 UTC m=+25.980046661 (delta=74.468227ms)
	I0708 21:12:29.113238   64608 fix.go:200] guest clock delta is within tolerance: 74.468227ms
	I0708 21:12:29.113246   64608 start.go:83] releasing machines lock for "kubernetes-upgrade-467273", held for 25.984234437s
	I0708 21:12:29.113277   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:12:29.113587   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetIP
	I0708 21:12:29.116590   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:29.116999   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:29.117051   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:29.117221   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:12:29.117805   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:12:29.118041   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:12:29.118129   64608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 21:12:29.118167   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:12:29.118258   64608 ssh_runner.go:195] Run: cat /version.json
	I0708 21:12:29.118278   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:12:29.121006   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:29.121037   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:29.121440   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:29.121472   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:29.121505   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:29.121523   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:29.121686   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:12:29.121792   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:12:29.121902   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:29.121976   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:12:29.122108   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:12:29.122145   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:12:29.122294   64608 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/id_rsa Username:docker}
	I0708 21:12:29.122300   64608 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/id_rsa Username:docker}
	I0708 21:12:29.235923   64608 ssh_runner.go:195] Run: systemctl --version
	I0708 21:12:29.242653   64608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 21:12:29.409102   64608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 21:12:29.415766   64608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 21:12:29.415867   64608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 21:12:29.436733   64608 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 21:12:29.436762   64608 start.go:494] detecting cgroup driver to use...
	I0708 21:12:29.436828   64608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 21:12:29.454733   64608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 21:12:29.469320   64608 docker.go:217] disabling cri-docker service (if available) ...
	I0708 21:12:29.469373   64608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 21:12:29.484480   64608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 21:12:29.499805   64608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 21:12:29.633119   64608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 21:12:29.794178   64608 docker.go:233] disabling docker service ...
	I0708 21:12:29.794292   64608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 21:12:29.809366   64608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 21:12:29.824853   64608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 21:12:29.949374   64608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 21:12:30.070261   64608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 21:12:30.084935   64608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 21:12:30.106034   64608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0708 21:12:30.106114   64608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:12:30.117445   64608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 21:12:30.117522   64608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:12:30.129396   64608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:12:30.142247   64608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:12:30.154318   64608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 21:12:30.166034   64608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 21:12:30.175972   64608 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 21:12:30.176032   64608 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 21:12:30.189710   64608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 21:12:30.199701   64608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:12:30.334961   64608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 21:12:30.488971   64608 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 21:12:30.489036   64608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 21:12:30.494381   64608 start.go:562] Will wait 60s for crictl version
	I0708 21:12:30.494445   64608 ssh_runner.go:195] Run: which crictl
	I0708 21:12:30.499021   64608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 21:12:30.547267   64608 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 21:12:30.547377   64608 ssh_runner.go:195] Run: crio --version
	I0708 21:12:30.579497   64608 ssh_runner.go:195] Run: crio --version
	I0708 21:12:30.618761   64608 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0708 21:12:30.620557   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetIP
	I0708 21:12:30.623802   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:30.624302   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:12:18 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:12:30.624332   64608 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:12:30.624631   64608 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0708 21:12:30.629803   64608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 21:12:30.647172   64608 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-467273 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-467273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 21:12:30.647290   64608 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0708 21:12:30.647348   64608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 21:12:30.688539   64608 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0708 21:12:30.688607   64608 ssh_runner.go:195] Run: which lz4
	I0708 21:12:30.693402   64608 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 21:12:30.698242   64608 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 21:12:30.698278   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0708 21:12:32.544664   64608 crio.go:462] duration metric: took 1.851310494s to copy over tarball
	I0708 21:12:32.544754   64608 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 21:12:35.272792   64608 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.728000319s)
	I0708 21:12:35.272824   64608 crio.go:469] duration metric: took 2.72812545s to extract the tarball
	I0708 21:12:35.272834   64608 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 21:12:35.319067   64608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 21:12:35.379034   64608 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0708 21:12:35.379061   64608 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 21:12:35.379138   64608 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:12:35.379181   64608 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0708 21:12:35.379201   64608 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0708 21:12:35.379246   64608 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0708 21:12:35.379389   64608 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0708 21:12:35.379410   64608 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0708 21:12:35.379192   64608 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 21:12:35.379437   64608 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0708 21:12:35.380615   64608 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0708 21:12:35.380976   64608 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 21:12:35.381053   64608 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:12:35.380983   64608 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0708 21:12:35.381064   64608 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0708 21:12:35.380983   64608 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0708 21:12:35.381133   64608 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0708 21:12:35.381140   64608 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0708 21:12:35.555803   64608 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0708 21:12:35.574837   64608 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0708 21:12:35.578802   64608 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0708 21:12:35.591432   64608 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0708 21:12:35.593084   64608 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0708 21:12:35.611049   64608 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0708 21:12:35.643467   64608 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0708 21:12:35.643519   64608 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0708 21:12:35.643566   64608 ssh_runner.go:195] Run: which crictl
	I0708 21:12:35.690472   64608 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:12:35.696179   64608 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0708 21:12:35.696228   64608 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0708 21:12:35.696276   64608 ssh_runner.go:195] Run: which crictl
	I0708 21:12:35.719064   64608 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0708 21:12:35.719124   64608 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0708 21:12:35.719214   64608 ssh_runner.go:195] Run: which crictl
	I0708 21:12:35.750753   64608 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0708 21:12:35.750796   64608 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0708 21:12:35.750766   64608 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0708 21:12:35.750854   64608 ssh_runner.go:195] Run: which crictl
	I0708 21:12:35.750861   64608 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0708 21:12:35.750901   64608 ssh_runner.go:195] Run: which crictl
	I0708 21:12:35.764095   64608 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 21:12:35.783206   64608 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0708 21:12:35.783252   64608 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0708 21:12:35.783263   64608 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0708 21:12:35.783279   64608 ssh_runner.go:195] Run: which crictl
	I0708 21:12:35.912399   64608 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0708 21:12:35.912450   64608 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0708 21:12:35.912486   64608 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0708 21:12:35.912509   64608 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0708 21:12:35.912605   64608 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0708 21:12:35.912639   64608 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 21:12:35.912645   64608 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0708 21:12:35.912673   64608 ssh_runner.go:195] Run: which crictl
	I0708 21:12:35.912694   64608 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0708 21:12:36.012900   64608 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0708 21:12:36.013090   64608 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0708 21:12:36.033916   64608 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0708 21:12:36.033956   64608 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0708 21:12:36.034033   64608 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0708 21:12:36.034147   64608 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 21:12:36.071623   64608 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0708 21:12:36.071699   64608 cache_images.go:92] duration metric: took 692.623981ms to LoadCachedImages
	W0708 21:12:36.071781   64608 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0708 21:12:36.071800   64608 kubeadm.go:928] updating node { 192.168.50.94 8443 v1.20.0 crio true true} ...
	I0708 21:12:36.071943   64608 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-467273 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-467273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 21:12:36.072021   64608 ssh_runner.go:195] Run: crio config
	I0708 21:12:36.127154   64608 cni.go:84] Creating CNI manager for ""
	I0708 21:12:36.127188   64608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:12:36.127199   64608 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 21:12:36.127224   64608 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.94 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-467273 NodeName:kubernetes-upgrade-467273 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0708 21:12:36.127404   64608 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-467273"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 21:12:36.127507   64608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0708 21:12:36.137998   64608 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 21:12:36.138076   64608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 21:12:36.148261   64608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0708 21:12:36.168082   64608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 21:12:36.186770   64608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0708 21:12:36.210590   64608 ssh_runner.go:195] Run: grep 192.168.50.94	control-plane.minikube.internal$ /etc/hosts
	I0708 21:12:36.214882   64608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 21:12:36.228771   64608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:12:36.361330   64608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:12:36.382225   64608 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273 for IP: 192.168.50.94
	I0708 21:12:36.382262   64608 certs.go:194] generating shared ca certs ...
	I0708 21:12:36.382284   64608 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:12:36.382456   64608 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 21:12:36.382522   64608 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 21:12:36.382533   64608 certs.go:256] generating profile certs ...
	I0708 21:12:36.382586   64608 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/client.key
	I0708 21:12:36.382606   64608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/client.crt with IP's: []
	I0708 21:12:36.710548   64608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/client.crt ...
	I0708 21:12:36.710581   64608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/client.crt: {Name:mk1f23b0a8f18505bee1aca4d86a8d0b46645255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:12:36.710769   64608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/client.key ...
	I0708 21:12:36.710786   64608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/client.key: {Name:mk54c0f74b9e477d9339598b8b0625d36ccb4c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:12:36.710886   64608 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.key.2cb56847
	I0708 21:12:36.710902   64608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.crt.2cb56847 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.94]
	I0708 21:12:36.995019   64608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.crt.2cb56847 ...
	I0708 21:12:36.995063   64608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.crt.2cb56847: {Name:mk4e64ab1ba6061d10f565ec6b89dc3cea6ef4da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:12:36.995272   64608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.key.2cb56847 ...
	I0708 21:12:36.995295   64608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.key.2cb56847: {Name:mkb5d9096b9911e438018f8cc065e5270e96070d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:12:36.995398   64608 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.crt.2cb56847 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.crt
	I0708 21:12:36.995522   64608 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.key.2cb56847 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.key
	I0708 21:12:36.995609   64608 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.key
	I0708 21:12:36.995632   64608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.crt with IP's: []
	I0708 21:12:37.151557   64608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.crt ...
	I0708 21:12:37.151589   64608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.crt: {Name:mke82e9f6d2e6784b18793625c1e3e9bc1e5f251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:12:37.151778   64608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.key ...
	I0708 21:12:37.151795   64608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.key: {Name:mkb0b00d22553e547e46de9152dcf7b95440b967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:12:37.152004   64608 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 21:12:37.152052   64608 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 21:12:37.152067   64608 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 21:12:37.152099   64608 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 21:12:37.152137   64608 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 21:12:37.152169   64608 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 21:12:37.152227   64608 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 21:12:37.152793   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 21:12:37.185755   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 21:12:37.217266   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 21:12:37.263382   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 21:12:37.290433   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0708 21:12:37.315706   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 21:12:37.343974   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 21:12:37.372756   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 21:12:37.399393   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 21:12:37.426806   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 21:12:37.455319   64608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 21:12:37.485887   64608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 21:12:37.506217   64608 ssh_runner.go:195] Run: openssl version
	I0708 21:12:37.512862   64608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 21:12:37.524847   64608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 21:12:37.530070   64608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 21:12:37.530126   64608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 21:12:37.536392   64608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 21:12:37.548354   64608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 21:12:37.559901   64608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:12:37.565365   64608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:12:37.565433   64608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:12:37.572291   64608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 21:12:37.584851   64608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 21:12:37.597873   64608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 21:12:37.603158   64608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 21:12:37.603227   64608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 21:12:37.609545   64608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 21:12:37.622080   64608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 21:12:37.626921   64608 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 21:12:37.626978   64608 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-467273 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-467273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 21:12:37.627047   64608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 21:12:37.627095   64608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 21:12:37.675037   64608 cri.go:89] found id: ""
	I0708 21:12:37.675123   64608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 21:12:37.685533   64608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:12:37.696677   64608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:12:37.708065   64608 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:12:37.708088   64608 kubeadm.go:156] found existing configuration files:
	
	I0708 21:12:37.708144   64608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 21:12:37.718787   64608 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:12:37.718901   64608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:12:37.730324   64608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 21:12:37.741067   64608 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:12:37.741137   64608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:12:37.752575   64608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 21:12:37.763476   64608 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:12:37.763545   64608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:12:37.774814   64608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 21:12:37.785857   64608 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:12:37.785941   64608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 21:12:37.797393   64608 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:12:38.155183   64608 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 21:14:36.274747   64608 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 21:14:36.274863   64608 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 21:14:36.276465   64608 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 21:14:36.276534   64608 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:14:36.276611   64608 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:14:36.276710   64608 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:14:36.276792   64608 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 21:14:36.276844   64608 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:14:36.278595   64608 out.go:204]   - Generating certificates and keys ...
	I0708 21:14:36.278701   64608 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:14:36.278782   64608 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:14:36.278841   64608 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0708 21:14:36.278894   64608 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0708 21:14:36.278948   64608 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0708 21:14:36.278990   64608 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0708 21:14:36.279039   64608 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0708 21:14:36.279153   64608 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-467273 localhost] and IPs [192.168.50.94 127.0.0.1 ::1]
	I0708 21:14:36.279221   64608 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0708 21:14:36.279339   64608 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-467273 localhost] and IPs [192.168.50.94 127.0.0.1 ::1]
	I0708 21:14:36.279395   64608 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0708 21:14:36.279464   64608 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0708 21:14:36.279515   64608 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0708 21:14:36.279565   64608 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:14:36.279610   64608 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:14:36.279663   64608 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:14:36.279727   64608 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:14:36.279782   64608 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:14:36.279908   64608 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:14:36.280010   64608 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:14:36.280058   64608 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:14:36.280120   64608 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:14:36.281676   64608 out.go:204]   - Booting up control plane ...
	I0708 21:14:36.281760   64608 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:14:36.281850   64608 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:14:36.281940   64608 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:14:36.282025   64608 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:14:36.282208   64608 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 21:14:36.282251   64608 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 21:14:36.282307   64608 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 21:14:36.282479   64608 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 21:14:36.282572   64608 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 21:14:36.282778   64608 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 21:14:36.282855   64608 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 21:14:36.283026   64608 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 21:14:36.283088   64608 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 21:14:36.283339   64608 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 21:14:36.283402   64608 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 21:14:36.283615   64608 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 21:14:36.283634   64608 kubeadm.go:309] 
	I0708 21:14:36.283683   64608 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 21:14:36.283746   64608 kubeadm.go:309] 		timed out waiting for the condition
	I0708 21:14:36.283762   64608 kubeadm.go:309] 
	I0708 21:14:36.283791   64608 kubeadm.go:309] 	This error is likely caused by:
	I0708 21:14:36.283824   64608 kubeadm.go:309] 		- The kubelet is not running
	I0708 21:14:36.283922   64608 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 21:14:36.283929   64608 kubeadm.go:309] 
	I0708 21:14:36.284017   64608 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 21:14:36.284051   64608 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 21:14:36.284081   64608 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 21:14:36.284088   64608 kubeadm.go:309] 
	I0708 21:14:36.284217   64608 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 21:14:36.284330   64608 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 21:14:36.284339   64608 kubeadm.go:309] 
	I0708 21:14:36.284439   64608 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 21:14:36.284528   64608 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 21:14:36.284599   64608 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 21:14:36.284674   64608 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 21:14:36.284732   64608 kubeadm.go:309] 
	W0708 21:14:36.284818   64608 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-467273 localhost] and IPs [192.168.50.94 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-467273 localhost] and IPs [192.168.50.94 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-467273 localhost] and IPs [192.168.50.94 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-467273 localhost] and IPs [192.168.50.94 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0708 21:14:36.284884   64608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 21:14:36.756210   64608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:14:36.771786   64608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:14:36.782415   64608 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:14:36.782440   64608 kubeadm.go:156] found existing configuration files:
	
	I0708 21:14:36.782506   64608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 21:14:36.792998   64608 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:14:36.793053   64608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:14:36.804120   64608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 21:14:36.814323   64608 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:14:36.814387   64608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:14:36.824986   64608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 21:14:36.835505   64608 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:14:36.835568   64608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:14:36.847189   64608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 21:14:36.858138   64608 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:14:36.858212   64608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 21:14:36.871262   64608 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:14:36.945867   64608 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 21:14:36.946064   64608 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:14:37.096950   64608 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:14:37.097119   64608 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:14:37.097266   64608 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 21:14:37.302327   64608 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:14:37.304218   64608 out.go:204]   - Generating certificates and keys ...
	I0708 21:14:37.304330   64608 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:14:37.304429   64608 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:14:37.304542   64608 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 21:14:37.304634   64608 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 21:14:37.304725   64608 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 21:14:37.304803   64608 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 21:14:37.305082   64608 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 21:14:37.305454   64608 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 21:14:37.305761   64608 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 21:14:37.306559   64608 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 21:14:37.306877   64608 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 21:14:37.306972   64608 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:14:37.417372   64608 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:14:37.574510   64608 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:14:37.691673   64608 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:14:37.959331   64608 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:14:37.983186   64608 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:14:37.983303   64608 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:14:37.983361   64608 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:14:38.157784   64608 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:14:38.159598   64608 out.go:204]   - Booting up control plane ...
	I0708 21:14:38.159721   64608 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:14:38.163164   64608 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:14:38.164239   64608 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:14:38.164975   64608 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:14:38.167369   64608 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 21:15:18.170102   64608 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 21:15:18.170313   64608 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 21:15:18.170551   64608 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 21:15:23.171535   64608 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 21:15:23.171740   64608 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 21:15:33.172628   64608 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 21:15:33.172858   64608 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 21:15:53.174672   64608 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 21:15:53.174921   64608 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 21:16:33.173815   64608 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 21:16:33.174120   64608 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 21:16:33.174149   64608 kubeadm.go:309] 
	I0708 21:16:33.174200   64608 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 21:16:33.174254   64608 kubeadm.go:309] 		timed out waiting for the condition
	I0708 21:16:33.174266   64608 kubeadm.go:309] 
	I0708 21:16:33.174331   64608 kubeadm.go:309] 	This error is likely caused by:
	I0708 21:16:33.174388   64608 kubeadm.go:309] 		- The kubelet is not running
	I0708 21:16:33.174523   64608 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 21:16:33.174533   64608 kubeadm.go:309] 
	I0708 21:16:33.174662   64608 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 21:16:33.174705   64608 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 21:16:33.174750   64608 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 21:16:33.174760   64608 kubeadm.go:309] 
	I0708 21:16:33.174885   64608 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 21:16:33.175028   64608 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 21:16:33.175042   64608 kubeadm.go:309] 
	I0708 21:16:33.175178   64608 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 21:16:33.175284   64608 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 21:16:33.175398   64608 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 21:16:33.175511   64608 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 21:16:33.175523   64608 kubeadm.go:309] 
	I0708 21:16:33.176379   64608 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 21:16:33.176484   64608 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 21:16:33.176564   64608 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 21:16:33.176635   64608 kubeadm.go:393] duration metric: took 3m55.549659088s to StartCluster
	I0708 21:16:33.176697   64608 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:16:33.176753   64608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:16:33.226229   64608 cri.go:89] found id: ""
	I0708 21:16:33.226257   64608 logs.go:276] 0 containers: []
	W0708 21:16:33.226268   64608 logs.go:278] No container was found matching "kube-apiserver"
	I0708 21:16:33.226275   64608 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:16:33.226338   64608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:16:33.265656   64608 cri.go:89] found id: ""
	I0708 21:16:33.265687   64608 logs.go:276] 0 containers: []
	W0708 21:16:33.265698   64608 logs.go:278] No container was found matching "etcd"
	I0708 21:16:33.265705   64608 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:16:33.265763   64608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:16:33.304538   64608 cri.go:89] found id: ""
	I0708 21:16:33.304562   64608 logs.go:276] 0 containers: []
	W0708 21:16:33.304570   64608 logs.go:278] No container was found matching "coredns"
	I0708 21:16:33.304575   64608 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:16:33.304619   64608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:16:33.342368   64608 cri.go:89] found id: ""
	I0708 21:16:33.342394   64608 logs.go:276] 0 containers: []
	W0708 21:16:33.342401   64608 logs.go:278] No container was found matching "kube-scheduler"
	I0708 21:16:33.342406   64608 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:16:33.342454   64608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:16:33.379641   64608 cri.go:89] found id: ""
	I0708 21:16:33.379670   64608 logs.go:276] 0 containers: []
	W0708 21:16:33.379678   64608 logs.go:278] No container was found matching "kube-proxy"
	I0708 21:16:33.379684   64608 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:16:33.379737   64608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:16:33.416657   64608 cri.go:89] found id: ""
	I0708 21:16:33.416683   64608 logs.go:276] 0 containers: []
	W0708 21:16:33.416697   64608 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 21:16:33.416704   64608 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:16:33.416766   64608 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:16:33.453324   64608 cri.go:89] found id: ""
	I0708 21:16:33.453348   64608 logs.go:276] 0 containers: []
	W0708 21:16:33.453358   64608 logs.go:278] No container was found matching "kindnet"
	I0708 21:16:33.453369   64608 logs.go:123] Gathering logs for dmesg ...
	I0708 21:16:33.453384   64608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:16:33.467172   64608 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:16:33.467216   64608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 21:16:33.611863   64608 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 21:16:33.611890   64608 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:16:33.611906   64608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:16:33.712589   64608 logs.go:123] Gathering logs for container status ...
	I0708 21:16:33.712632   64608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:16:33.788514   64608 logs.go:123] Gathering logs for kubelet ...
	I0708 21:16:33.788546   64608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 21:16:33.861845   64608 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0708 21:16:33.861895   64608 out.go:239] * 
	* 
	W0708 21:16:33.861958   64608 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 21:16:33.861987   64608 out.go:239] * 
	* 
	W0708 21:16:33.863039   64608 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 21:16:33.866316   64608 out.go:177] 
	W0708 21:16:33.867687   64608 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 21:16:33.867750   64608 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0708 21:16:33.867780   64608 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0708 21:16:33.869533   64608 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-467273 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
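The kubeadm output above already names the relevant checks; a minimal troubleshooting sketch, assuming shell access to the node (for example via `minikube ssh -p kubernetes-upgrade-467273`) and the CRI-O socket path shown in the log, might look like:

	# Inspect kubelet state and recent logs (systemd host, as in this run)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# Probe the kubelet healthz endpoint that kubeadm polls
	curl -sSL http://localhost:10248/healthz
	# List control-plane containers via the CRI-O socket and read logs of a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # replace CONTAINERID
	# If the cgroup driver is the culprit, the suggestion in the log is to retry with:
	#   minikube start -p kubernetes-upgrade-467273 --extra-config=kubelet.cgroup-driver=systemd ...

This is only a sketch of the steps the kubeadm and minikube messages suggest, not what the test harness runs.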
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-467273
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-467273: (6.312711196s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-467273 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-467273 status --format={{.Host}}: exit status 7 (65.850994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-467273 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-467273 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.403373213s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-467273 version --output=json
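A hedged sketch of checking the reported versions after the upgrade: the test parses this JSON in Go, but from a shell one could extract the server version with jq (jq is an assumption here, not something the harness uses); the expected value follows the --kubernetes-version=v1.30.2 flag used above.

	# Expect "v1.30.2" after the upgrade step above
	kubectl --context kubernetes-upgrade-467273 version --output=json \
	  | jq -r '.serverVersion.gitVersion'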
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-467273 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-467273 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (93.765734ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-467273] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19195
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-467273
	    minikube start -p kubernetes-upgrade-467273 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4672732 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.2, by running:
	    
	    minikube start -p kubernetes-upgrade-467273 --kubernetes-version=v1.30.2
	    

                                                
                                                
** /stderr **
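For reference, a sketch of the first recovery path suggested above, if a v1.20.0 cluster were actually wanted; it reuses only flags already shown in this log. The test does not do this: as the next step shows, it restarts the existing cluster at v1.30.2 instead.

	# Recreate the profile at the older version (destructive: deletes the existing cluster)
	minikube delete -p kubernetes-upgrade-467273
	minikube start -p kubernetes-upgrade-467273 --kubernetes-version=v1.20.0 \
	  --memory=2200 --driver=kvm2 --container-runtime=crio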
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-467273 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0708 21:17:19.106148   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
E0708 21:17:46.790478   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-467273 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.490284916s)
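A quick health check after this restart could look like the following sketch; the post-mortem below runs a similar status query through the test helpers, so this is illustrative only.

	out/minikube-linux-amd64 -p kubernetes-upgrade-467273 status
	kubectl --context kubernetes-upgrade-467273 get nodes -o wide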
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-08 21:18:15.356982631 +0000 UTC m=+6555.269860441
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-467273 -n kubernetes-upgrade-467273
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-467273 logs -n 25
E0708 21:18:15.845905   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
E0708 21:18:17.126321   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-467273 logs -n 25: (1.786914919s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-733920 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-733920                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:50 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028021                  | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071971  | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-239931                 | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071971       | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC | 08 Jul 24 21:01 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 21:12 UTC | 08 Jul 24 21:12 UTC |
	| start   | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273    | jenkins | v1.33.1 | 08 Jul 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273    | jenkins | v1.33.1 | 08 Jul 24 21:16 UTC | 08 Jul 24 21:16 UTC |
	| start   | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273    | jenkins | v1.33.1 | 08 Jul 24 21:16 UTC | 08 Jul 24 21:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:17 UTC |
	| start   | -p stopped-upgrade-957981                              | minikube                     | jenkins | v1.26.0 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:17 UTC |
	|         | --memory=2200 --vm-driver=kvm2                         |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	| delete  | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:17 UTC |
	| start   | -p newest-cni-292907 --memory=2200 --alsologtostderr   | newest-cni-292907            | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273    | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273    | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| stop    | stopped-upgrade-957981 stop                            | minikube                     | jenkins | v1.26.0 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:17 UTC |
	| start   | -p stopped-upgrade-957981                              | stopped-upgrade-957981       | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 21:17:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 21:17:56.865029   67600 out.go:291] Setting OutFile to fd 1 ...
	I0708 21:17:56.865202   67600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 21:17:56.865221   67600 out.go:304] Setting ErrFile to fd 2...
	I0708 21:17:56.865229   67600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 21:17:56.865702   67600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 21:17:56.866518   67600 out.go:298] Setting JSON to false
	I0708 21:17:56.867428   67600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7226,"bootTime":1720466251,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 21:17:56.867524   67600 start.go:139] virtualization: kvm guest
	I0708 21:17:56.869962   67600 out.go:177] * [stopped-upgrade-957981] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 21:17:56.871329   67600 notify.go:220] Checking for updates...
	I0708 21:17:56.871334   67600 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 21:17:56.873144   67600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 21:17:56.874594   67600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:17:56.876067   67600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 21:17:56.877314   67600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 21:17:56.878504   67600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 21:17:56.880138   67600 config.go:182] Loaded profile config "stopped-upgrade-957981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0708 21:17:56.880555   67600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:17:56.880616   67600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:17:56.896292   67600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38445
	I0708 21:17:56.896717   67600 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:17:56.897248   67600 main.go:141] libmachine: Using API Version  1
	I0708 21:17:56.897289   67600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:17:56.897723   67600 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:17:56.897898   67600 main.go:141] libmachine: (stopped-upgrade-957981) Calling .DriverName
	I0708 21:17:56.899787   67600 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0708 21:17:52.258348   66668 machine.go:94] provisionDockerMachine start ...
	I0708 21:17:52.258381   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:17:52.258629   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:17:52.262342   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:52.262836   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:17:52.262880   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:52.263159   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:17:52.263370   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:17:52.263568   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:17:52.263733   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:17:52.263929   66668 main.go:141] libmachine: Using SSH client type: native
	I0708 21:17:52.264148   66668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0708 21:17:52.264164   66668 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 21:17:52.372618   66668 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-467273
	
	I0708 21:17:52.372656   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetMachineName
	I0708 21:17:52.372926   66668 buildroot.go:166] provisioning hostname "kubernetes-upgrade-467273"
	I0708 21:17:52.372948   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetMachineName
	I0708 21:17:52.373142   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:17:52.376754   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:52.377293   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:17:52.377326   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:52.377679   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:17:52.377885   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:17:52.378054   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:17:52.378246   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:17:52.378400   66668 main.go:141] libmachine: Using SSH client type: native
	I0708 21:17:52.378627   66668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0708 21:17:52.378648   66668 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-467273 && echo "kubernetes-upgrade-467273" | sudo tee /etc/hostname
	I0708 21:17:52.513360   66668 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-467273
	
	I0708 21:17:52.513391   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:17:52.517311   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:52.517752   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:17:52.517824   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:52.519179   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:17:52.519491   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:17:52.519676   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:17:52.519891   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:17:52.520097   66668 main.go:141] libmachine: Using SSH client type: native
	I0708 21:17:52.520397   66668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0708 21:17:52.520423   66668 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-467273' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-467273/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-467273' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 21:17:52.649617   66668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 21:17:52.649651   66668 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 21:17:52.649700   66668 buildroot.go:174] setting up certificates
	I0708 21:17:52.649721   66668 provision.go:84] configureAuth start
	I0708 21:17:52.649741   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetMachineName
	I0708 21:17:52.650035   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetIP
	I0708 21:17:52.653162   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:52.653596   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:17:52.653639   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:52.653860   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:17:52.656833   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:52.657426   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:17:52.657461   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:52.657678   66668 provision.go:143] copyHostCerts
	I0708 21:17:52.657771   66668 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 21:17:52.657786   66668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 21:17:52.657862   66668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 21:17:52.657982   66668 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 21:17:52.657997   66668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 21:17:52.658028   66668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 21:17:52.658106   66668 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 21:17:52.658118   66668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 21:17:52.658145   66668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 21:17:52.658207   66668 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-467273 san=[127.0.0.1 192.168.50.94 kubernetes-upgrade-467273 localhost minikube]
	I0708 21:17:52.991674   66668 provision.go:177] copyRemoteCerts
	I0708 21:17:52.991755   66668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 21:17:52.991788   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:17:52.995942   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:52.996412   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:17:52.996453   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:52.996853   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:17:52.997069   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:17:52.997267   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:17:52.997389   66668 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/id_rsa Username:docker}
	I0708 21:17:53.087711   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 21:17:53.126422   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 21:17:53.156257   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0708 21:17:53.197860   66668 provision.go:87] duration metric: took 548.126118ms to configureAuth
	I0708 21:17:53.197894   66668 buildroot.go:189] setting minikube options for container-runtime
	I0708 21:17:53.198139   66668 config.go:182] Loaded profile config "kubernetes-upgrade-467273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:17:53.198227   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:17:53.201699   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:53.202174   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:17:53.202206   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:53.202448   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:17:53.202693   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:17:53.202899   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:17:53.203066   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:17:53.203251   66668 main.go:141] libmachine: Using SSH client type: native
	I0708 21:17:53.203500   66668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0708 21:17:53.203521   66668 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 21:17:56.900997   67600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 21:17:56.901314   67600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:17:56.901355   67600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:17:56.917190   67600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43407
	I0708 21:17:56.917742   67600 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:17:56.918349   67600 main.go:141] libmachine: Using API Version  1
	I0708 21:17:56.918384   67600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:17:56.918905   67600 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:17:56.919104   67600 main.go:141] libmachine: (stopped-upgrade-957981) Calling .DriverName
	I0708 21:17:56.959349   67600 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 21:17:56.960735   67600 start.go:297] selected driver: kvm2
	I0708 21:17:56.960751   67600 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-957981 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-957
981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0708 21:17:56.960876   67600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 21:17:56.961730   67600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 21:17:56.961822   67600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 21:17:56.977877   67600 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 21:17:56.978267   67600 cni.go:84] Creating CNI manager for ""
	I0708 21:17:56.978289   67600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:17:56.978353   67600 start.go:340] cluster config:
	{Name:stopped-upgrade-957981 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-957981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0708 21:17:56.978489   67600 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 21:17:56.981261   67600 out.go:177] * Starting "stopped-upgrade-957981" primary control-plane node in "stopped-upgrade-957981" cluster
	I0708 21:17:57.988718   66608 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.296508638s)
	I0708 21:17:57.988757   66608 crio.go:469] duration metric: took 2.296635151s to extract the tarball
	I0708 21:17:57.988766   66608 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 21:17:58.026170   66608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 21:17:58.071835   66608 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 21:17:58.071861   66608 cache_images.go:84] Images are preloaded, skipping loading
	I0708 21:17:58.071872   66608 kubeadm.go:928] updating node { 192.168.61.147 8443 v1.30.2 crio true true} ...
	I0708 21:17:58.071993   66608 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-292907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:newest-cni-292907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 21:17:58.072073   66608 ssh_runner.go:195] Run: crio config
	I0708 21:17:58.122192   66608 cni.go:84] Creating CNI manager for ""
	I0708 21:17:58.122215   66608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:17:58.122229   66608 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0708 21:17:58.122253   66608 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.147 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-292907 NodeName:newest-cni-292907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.61.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 21:17:58.122386   66608 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-292907"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 21:17:58.122444   66608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 21:17:58.133250   66608 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 21:17:58.133326   66608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 21:17:58.144044   66608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0708 21:17:58.166516   66608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 21:17:58.185945   66608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0708 21:17:58.204788   66608 ssh_runner.go:195] Run: grep 192.168.61.147	control-plane.minikube.internal$ /etc/hosts
	I0708 21:17:58.208821   66608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 21:17:58.222112   66608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:17:58.352402   66608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:17:58.380319   66608 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907 for IP: 192.168.61.147
	I0708 21:17:58.380348   66608 certs.go:194] generating shared ca certs ...
	I0708 21:17:58.380369   66608 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:17:58.380563   66608 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 21:17:58.380628   66608 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 21:17:58.380645   66608 certs.go:256] generating profile certs ...
	I0708 21:17:58.380716   66608 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/client.key
	I0708 21:17:58.380745   66608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/client.crt with IP's: []
	I0708 21:17:58.654247   66608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/client.crt ...
	I0708 21:17:58.654278   66608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/client.crt: {Name:mk23e5a2ffaafaa50aa98b2acfdc54598c3e39e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:17:58.662048   66608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/client.key ...
	I0708 21:17:58.662078   66608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/client.key: {Name:mk67458e4685daed58c77be22165624478c395c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:17:58.662241   66608 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/apiserver.key.148a608d
	I0708 21:17:58.662263   66608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/apiserver.crt.148a608d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.147]
	I0708 21:17:58.736756   66608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/apiserver.crt.148a608d ...
	I0708 21:17:58.736784   66608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/apiserver.crt.148a608d: {Name:mkb0e155f385a694437bcfe2b79f811e44fbc2d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:17:58.736943   66608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/apiserver.key.148a608d ...
	I0708 21:17:58.736956   66608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/apiserver.key.148a608d: {Name:mke6e999a844451907227b0c6c49502475438ce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:17:58.737031   66608 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/apiserver.crt.148a608d -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/apiserver.crt
	I0708 21:17:58.737110   66608 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/apiserver.key.148a608d -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/apiserver.key
	I0708 21:17:58.737178   66608 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/proxy-client.key
	I0708 21:17:58.737196   66608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/proxy-client.crt with IP's: []
	I0708 21:17:58.807358   66608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/proxy-client.crt ...
	I0708 21:17:58.807386   66608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/proxy-client.crt: {Name:mk6a13692fff9440a27df6c1662f5046152086c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:17:58.837408   66608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/proxy-client.key ...
	I0708 21:17:58.837449   66608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/proxy-client.key: {Name:mkdf6f441bffd162e9f9cd55212cb490dccfd63c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:17:58.837783   66608 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 21:17:58.837838   66608 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 21:17:58.837846   66608 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 21:17:58.837882   66608 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 21:17:58.837912   66608 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 21:17:58.837941   66608 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 21:17:58.837996   66608 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 21:17:58.838858   66608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 21:17:58.875890   66608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 21:17:58.904867   66608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 21:17:58.933150   66608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 21:17:58.963507   66608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0708 21:17:58.990646   66608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 21:17:59.018740   66608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 21:17:59.047683   66608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/newest-cni-292907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 21:17:59.076861   66608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 21:17:59.108134   66608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 21:17:59.143584   66608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 21:17:59.179569   66608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
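The "scp memory --> ..." lines mean the file contents are generated on the host and streamed straight to the guest over the existing SSH connection rather than copied from a local file. A rough sketch of that pattern using golang.org/x/crypto/ssh follows; writeRemoteFile and the sudo tee approach are assumptions for illustration, not minikube's ssh_runner implementation.

    package sshxfer

    import (
    	"bytes"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // writeRemoteFile streams an in-memory byte slice to a path on the guest,
    // roughly what the "scp memory --> ..." log lines describe. It assumes an
    // already-connected *ssh.Client and passwordless sudo on the guest.
    func writeRemoteFile(client *ssh.Client, data []byte, dst string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	// Pipe the bytes through sudo tee so the file can land in a
    	// root-owned directory such as /var/lib/minikube/certs.
    	return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
    }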
	I0708 21:17:59.224692   66608 ssh_runner.go:195] Run: openssl version
	I0708 21:17:59.232467   66608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 21:17:59.246085   66608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:17:59.252911   66608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:17:59.252984   66608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:17:59.264731   66608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 21:17:59.277677   66608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 21:17:59.289704   66608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 21:17:59.294705   66608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 21:17:59.294778   66608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 21:17:59.300965   66608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 21:17:59.312800   66608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 21:17:59.325302   66608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 21:17:59.331080   66608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 21:17:59.331149   66608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 21:17:59.337605   66608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
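The block above repeats one pattern per CA file: hash the PEM with `openssl x509 -hash -noout` and then link it into /etc/ssl/certs under `<hash>.0` so OpenSSL-based clients inside the guest trust it. A small self-contained sketch of that pattern is below; linkCACert is a made-up name, and real use needs root to write under /etc/ssl/certs.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert mirrors the sequence in the log above: ask openssl for the
    // subject hash of a CA certificate, then expose it to OpenSSL consumers
    // as /etc/ssl/certs/<hash>.0. Error handling is simplified; this is an
    // illustrative sketch, not minikube's implementation.
    func linkCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace any stale link, like `ln -fs`
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }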
	I0708 21:17:59.353394   66608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 21:17:59.360136   66608 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 21:17:59.360225   66608 kubeadm.go:391] StartCluster: {Name:newest-cni-292907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:newest-cni-292907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 21:17:59.360360   66608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 21:17:59.360455   66608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 21:17:59.404151   66608 cri.go:89] found id: ""
	I0708 21:17:59.404238   66608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 21:17:59.415364   66608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:17:59.426129   66608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:17:59.436185   66608 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:17:59.436205   66608 kubeadm.go:156] found existing configuration files:
	
	I0708 21:17:59.436251   66608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 21:17:59.445794   66608 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:17:59.445873   66608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:17:59.455975   66608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 21:17:59.465400   66608 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:17:59.465468   66608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:17:59.478088   66608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 21:17:59.491164   66608 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:17:59.491228   66608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:17:59.504230   66608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 21:17:59.516356   66608 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:17:59.516411   66608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 21:17:59.526562   66608 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:17:59.651526   66608 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 21:17:59.651604   66608 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:17:59.795678   66608 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:17:59.795829   66608 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:17:59.796022   66608 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0708 21:18:00.040282   66608 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
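The init command at 21:17:59.526562 is worth unpacking: kubeadm is taken from the version-pinned /var/lib/minikube/binaries/<version> directory via an env PATH prefix, pointed at the generated /var/tmp/minikube/kubeadm.yaml, and told to ignore a fixed list of preflight checks that minikube handles itself. The sketch below assembles a command string of the same shape; kubeadmInitCmd is illustrative only, not minikube's bootstrapper code.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // kubeadmInitCmd assembles an init invocation in the same style as the
    // command logged above: kubeadm from the version-pinned binaries
    // directory, the generated config file, and a list of preflight checks
    // to skip. Illustrative sketch only.
    func kubeadmInitCmd(version, config string, ignore []string) string {
    	cmd := fmt.Sprintf(
    		"sudo env PATH=\"/var/lib/minikube/binaries/%s:$PATH\" kubeadm init --config %s",
    		version, config)
    	if len(ignore) > 0 {
    		cmd += " --ignore-preflight-errors=" + strings.Join(ignore, ",")
    	}
    	return cmd
    }

    func main() {
    	fmt.Println(kubeadmInitCmd("v1.30.2", "/var/tmp/minikube/kubeadm.yaml",
    		[]string{"DirAvailable--etc-kubernetes-manifests", "Swap", "NumCPU", "Mem"}))
    }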
	I0708 21:17:56.982501   67600 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0708 21:17:56.982565   67600 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0708 21:17:56.982587   67600 cache.go:56] Caching tarball of preloaded images
	I0708 21:17:56.982684   67600 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 21:17:56.982696   67600 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0708 21:17:56.982789   67600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/stopped-upgrade-957981/config.json ...
	I0708 21:17:56.983013   67600 start.go:360] acquireMachinesLock for stopped-upgrade-957981: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 21:18:00.860497   67600 start.go:364] duration metric: took 3.87744482s to acquireMachinesLock for "stopped-upgrade-957981"
	I0708 21:18:00.860570   67600 start.go:96] Skipping create...Using existing machine configuration
	I0708 21:18:00.860589   67600 fix.go:54] fixHost starting: 
	I0708 21:18:00.861051   67600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:18:00.861110   67600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:18:00.880726   67600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45399
	I0708 21:18:00.881210   67600 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:18:00.881761   67600 main.go:141] libmachine: Using API Version  1
	I0708 21:18:00.881784   67600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:18:00.882083   67600 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:18:00.882243   67600 main.go:141] libmachine: (stopped-upgrade-957981) Calling .DriverName
	I0708 21:18:00.882411   67600 main.go:141] libmachine: (stopped-upgrade-957981) Calling .GetState
	I0708 21:18:00.884096   67600 fix.go:112] recreateIfNeeded on stopped-upgrade-957981: state=Stopped err=<nil>
	I0708 21:18:00.884121   67600 main.go:141] libmachine: (stopped-upgrade-957981) Calling .DriverName
	W0708 21:18:00.884270   67600 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 21:18:00.886040   67600 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-957981" ...
	I0708 21:18:00.115534   66608 out.go:204]   - Generating certificates and keys ...
	I0708 21:18:00.115704   66608 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:18:00.115796   66608 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:18:00.538821   66608 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0708 21:18:00.678220   66608 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0708 21:18:00.975558   66608 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0708 21:18:00.887855   67600 main.go:141] libmachine: (stopped-upgrade-957981) Calling .Start
	I0708 21:18:00.888048   67600 main.go:141] libmachine: (stopped-upgrade-957981) Ensuring networks are active...
	I0708 21:18:00.888860   67600 main.go:141] libmachine: (stopped-upgrade-957981) Ensuring network default is active
	I0708 21:18:00.889233   67600 main.go:141] libmachine: (stopped-upgrade-957981) Ensuring network mk-stopped-upgrade-957981 is active
	I0708 21:18:00.889647   67600 main.go:141] libmachine: (stopped-upgrade-957981) Getting domain xml...
	I0708 21:18:00.890397   67600 main.go:141] libmachine: (stopped-upgrade-957981) Creating domain...
	I0708 21:18:00.614132   66668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 21:18:00.614160   66668 machine.go:97] duration metric: took 8.355792069s to provisionDockerMachine
	I0708 21:18:00.614173   66668 start.go:293] postStartSetup for "kubernetes-upgrade-467273" (driver="kvm2")
	I0708 21:18:00.614182   66668 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 21:18:00.614251   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:18:00.614557   66668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 21:18:00.614583   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:18:00.617467   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:18:00.617795   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:18:00.617836   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:18:00.618004   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:18:00.618208   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:18:00.618366   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:18:00.618514   66668 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/id_rsa Username:docker}
	I0708 21:18:00.710786   66668 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 21:18:00.715408   66668 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 21:18:00.715435   66668 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 21:18:00.715522   66668 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 21:18:00.715627   66668 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 21:18:00.715745   66668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 21:18:00.726174   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 21:18:00.753895   66668 start.go:296] duration metric: took 139.70973ms for postStartSetup
	I0708 21:18:00.753939   66668 fix.go:56] duration metric: took 8.521127404s for fixHost
	I0708 21:18:00.753959   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:18:00.757098   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:18:00.757528   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:18:00.757560   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:18:00.757760   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:18:00.757985   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:18:00.758162   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:18:00.758300   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:18:00.758489   66668 main.go:141] libmachine: Using SSH client type: native
	I0708 21:18:00.758707   66668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0708 21:18:00.758720   66668 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 21:18:00.860338   66668 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720473480.843018798
	
	I0708 21:18:00.860362   66668 fix.go:216] guest clock: 1720473480.843018798
	I0708 21:18:00.860371   66668 fix.go:229] Guest: 2024-07-08 21:18:00.843018798 +0000 UTC Remote: 2024-07-08 21:18:00.753944009 +0000 UTC m=+43.884182055 (delta=89.074789ms)
	I0708 21:18:00.860405   66668 fix.go:200] guest clock delta is within tolerance: 89.074789ms
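The guest clock check above runs `date +%s.%N` in the VM and compares the result with the host's wall clock; the run is accepted because the 89ms delta is within tolerance. A compact sketch of parsing that output and computing the delta follows; parseGuestClock is a hypothetical helper, while the logged fix.go lines show minikube doing the equivalent.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts the output of `date +%s.%N` (for example
    // "1720473480.843018798") into a time.Time so the host can compute the
    // guest/host clock delta shown in the log above. Illustrative only.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := parseGuestClock("1720473480.843018798")
    	fmt.Printf("guest clock delta: %v\n", time.Since(guest))
    }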
	I0708 21:18:00.860412   66668 start.go:83] releasing machines lock for "kubernetes-upgrade-467273", held for 8.627634219s
	I0708 21:18:00.860443   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:18:00.860710   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetIP
	I0708 21:18:00.863494   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:18:00.863891   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:18:00.863921   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:18:00.864118   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:18:00.864595   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:18:00.864774   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:18:00.864865   66668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 21:18:00.864905   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:18:00.864988   66668 ssh_runner.go:195] Run: cat /version.json
	I0708 21:18:00.865001   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHHostname
	I0708 21:18:00.867539   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:18:00.867761   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:18:00.867889   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:18:00.867919   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:18:00.868042   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:18:00.868193   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:18:00.868212   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:18:00.868221   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:18:00.868401   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:18:00.868411   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHPort
	I0708 21:18:00.868573   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHKeyPath
	I0708 21:18:00.868587   66668 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/id_rsa Username:docker}
	I0708 21:18:00.868682   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetSSHUsername
	I0708 21:18:00.868826   66668 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kubernetes-upgrade-467273/id_rsa Username:docker}
	I0708 21:18:00.970844   66668 ssh_runner.go:195] Run: systemctl --version
	I0708 21:18:00.978916   66668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 21:18:01.143749   66668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 21:18:01.155012   66668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 21:18:01.155086   66668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 21:18:01.166167   66668 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0708 21:18:01.166189   66668 start.go:494] detecting cgroup driver to use...
	I0708 21:18:01.166261   66668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 21:18:01.191032   66668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 21:18:01.207736   66668 docker.go:217] disabling cri-docker service (if available) ...
	I0708 21:18:01.207811   66668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 21:18:01.225622   66668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 21:18:01.241332   66668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 21:18:01.405762   66668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 21:18:01.585335   66668 docker.go:233] disabling docker service ...
	I0708 21:18:01.585420   66668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 21:18:01.604108   66668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 21:18:01.620326   66668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 21:18:01.785317   66668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 21:18:01.177130   66608 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0708 21:18:01.631941   66608 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0708 21:18:01.632362   66608 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-292907] and IPs [192.168.61.147 127.0.0.1 ::1]
	I0708 21:18:01.829388   66608 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0708 21:18:01.829558   66608 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-292907] and IPs [192.168.61.147 127.0.0.1 ::1]
	I0708 21:18:01.908738   66608 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0708 21:18:02.102941   66608 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0708 21:18:02.188882   66608 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0708 21:18:02.189104   66608 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:18:02.368804   66608 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:18:02.627951   66608 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 21:18:02.708683   66608 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:18:02.813773   66608 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:18:03.143208   66608 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:18:03.144247   66608 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:18:03.148250   66608 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:18:01.933108   66668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 21:18:01.951041   66668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 21:18:01.972379   66668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 21:18:01.972448   66668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:18:01.988109   66668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 21:18:01.988191   66668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:18:01.999954   66668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:18:02.011402   66668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:18:02.022916   66668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 21:18:02.035313   66668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:18:02.048055   66668 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:18:02.063050   66668 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:18:02.134578   66668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 21:18:02.178667   66668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 21:18:02.203812   66668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:18:02.567682   66668 ssh_runner.go:195] Run: sudo systemctl restart crio
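Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, put conmon in the pod cgroup, and allow unprivileged low ports before crio is restarted. Assuming the stock drop-in shipped in the ISO, the touched parts of /etc/crio/crio.conf.d/02-crio.conf would end up roughly like this (section headers shown for orientation; the real file contains more settings):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"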
	I0708 21:18:03.083400   66668 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 21:18:03.083538   66668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 21:18:03.090582   66668 start.go:562] Will wait 60s for crictl version
	I0708 21:18:03.090660   66668 ssh_runner.go:195] Run: which crictl
	I0708 21:18:03.096557   66668 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 21:18:03.154167   66668 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 21:18:03.154248   66668 ssh_runner.go:195] Run: crio --version
	I0708 21:18:03.202032   66668 ssh_runner.go:195] Run: crio --version
	I0708 21:18:03.321157   66668 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
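Before declaring the runtime ready, the start code waits up to 60s for /var/run/crio/crio.sock to appear and another 60s for crictl to answer a version call. A minimal local sketch of the socket wait is below; minikube performs the stat over SSH, so waitForSocket here is only illustrative.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for a unix socket path until it appears or the
    // timeout expires, mirroring the "Will wait 60s for socket path
    // /var/run/crio/crio.sock" step in the log above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }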
	I0708 21:18:03.149888   66608 out.go:204]   - Booting up control plane ...
	I0708 21:18:03.150038   66608 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:18:03.150170   66608 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:18:03.151052   66608 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:18:03.177201   66608 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:18:03.177339   66608 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:18:03.177445   66608 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:18:03.344341   66608 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 21:18:03.344479   66608 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 21:18:04.346836   66608 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002602894s
	I0708 21:18:04.346923   66608 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 21:18:02.266112   67600 main.go:141] libmachine: (stopped-upgrade-957981) Waiting to get IP...
	I0708 21:18:02.267270   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:18:02.267812   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:18:02.267908   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | I0708 21:18:02.267793   67653 retry.go:31] will retry after 236.102978ms: waiting for machine to come up
	I0708 21:18:02.505489   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:18:02.506253   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:18:02.506280   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | I0708 21:18:02.506199   67653 retry.go:31] will retry after 254.462864ms: waiting for machine to come up
	I0708 21:18:02.762801   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:18:02.763398   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:18:02.763427   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | I0708 21:18:02.763359   67653 retry.go:31] will retry after 389.264192ms: waiting for machine to come up
	I0708 21:18:03.154041   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:18:03.154607   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:18:03.154739   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | I0708 21:18:03.154658   67653 retry.go:31] will retry after 600.230002ms: waiting for machine to come up
	I0708 21:18:03.757046   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:18:03.757632   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:18:03.757659   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | I0708 21:18:03.757571   67653 retry.go:31] will retry after 664.590101ms: waiting for machine to come up
	I0708 21:18:04.423636   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:18:04.424224   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:18:04.424340   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | I0708 21:18:04.424296   67653 retry.go:31] will retry after 659.959675ms: waiting for machine to come up
	I0708 21:18:05.085649   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:18:05.086180   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:18:05.086213   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | I0708 21:18:05.086133   67653 retry.go:31] will retry after 846.442652ms: waiting for machine to come up
	I0708 21:18:05.934162   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:18:05.934711   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:18:05.934740   67600 main.go:141] libmachine: (stopped-upgrade-957981) DBG | I0708 21:18:05.934658   67653 retry.go:31] will retry after 1.150768111s: waiting for machine to come up
	I0708 21:18:03.322520   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetIP
	I0708 21:18:03.326406   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:18:03.327019   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:18:03.327052   66668 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:18:03.327396   66668 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0708 21:18:03.336169   66668 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-467273 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:kubernetes-upgrade-467273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 21:18:03.336298   66668 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 21:18:03.336365   66668 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 21:18:03.489438   66668 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 21:18:03.489466   66668 crio.go:433] Images already preloaded, skipping extraction
	I0708 21:18:03.489544   66668 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 21:18:03.653573   66668 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 21:18:03.653605   66668 cache_images.go:84] Images are preloaded, skipping loading
	I0708 21:18:03.653616   66668 kubeadm.go:928] updating node { 192.168.50.94 8443 v1.30.2 crio true true} ...
	I0708 21:18:03.653749   66668 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-467273 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-467273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 21:18:03.653835   66668 ssh_runner.go:195] Run: crio config
	I0708 21:18:03.798012   66668 cni.go:84] Creating CNI manager for ""
	I0708 21:18:03.798043   66668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:18:03.798056   66668 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 21:18:03.798085   66668 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.94 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-467273 NodeName:kubernetes-upgrade-467273 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 21:18:03.798282   66668 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-467273"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
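The kubeadm config printed above is rendered from the cluster profile (node IP, node name, API server port, CIDRs, and so on). The sketch below shows the general text/template approach with a tiny, made-up template; minikube's real template is far larger and lives in its kubeadm package.

    package main

    import (
    	"os"
    	"text/template"
    )

    // A deliberately minimal stand-in for the full kubeadm config template.
    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
    	data := struct {
    		NodeIP        string
    		NodeName      string
    		APIServerPort int
    	}{NodeIP: "192.168.50.94", NodeName: "kubernetes-upgrade-467273", APIServerPort: 8443}
    	template.Must(template.New("kubeadm").Parse(initTmpl)).Execute(os.Stdout, data)
    }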
	
	I0708 21:18:03.798363   66668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 21:18:03.830743   66668 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 21:18:03.830813   66668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 21:18:03.858695   66668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0708 21:18:03.908818   66668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 21:18:03.951273   66668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0708 21:18:03.985886   66668 ssh_runner.go:195] Run: grep 192.168.50.94	control-plane.minikube.internal$ /etc/hosts
	I0708 21:18:03.990751   66668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:18:04.149859   66668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:18:04.173056   66668 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273 for IP: 192.168.50.94
	I0708 21:18:04.173091   66668 certs.go:194] generating shared ca certs ...
	I0708 21:18:04.173115   66668 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:18:04.173310   66668 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 21:18:04.173360   66668 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 21:18:04.173374   66668 certs.go:256] generating profile certs ...
	I0708 21:18:04.173518   66668 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/client.key
	I0708 21:18:04.173603   66668 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.key.2cb56847
	I0708 21:18:04.173660   66668 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.key
	I0708 21:18:04.173852   66668 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 21:18:04.173909   66668 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 21:18:04.173925   66668 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 21:18:04.173962   66668 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 21:18:04.174000   66668 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 21:18:04.174103   66668 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 21:18:04.174173   66668 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 21:18:04.174998   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 21:18:04.205379   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 21:18:04.248132   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 21:18:04.284047   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 21:18:04.315431   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0708 21:18:04.343830   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 21:18:04.372825   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 21:18:04.401662   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 21:18:04.468960   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 21:18:04.500776   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 21:18:04.542769   66668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 21:18:04.580048   66668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 21:18:04.604701   66668 ssh_runner.go:195] Run: openssl version
	I0708 21:18:04.610995   66668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 21:18:04.628483   66668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:18:04.635322   66668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:18:04.635430   66668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:18:04.642747   66668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 21:18:04.663834   66668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 21:18:04.685300   66668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 21:18:04.691026   66668 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 21:18:04.691110   66668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 21:18:04.700020   66668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 21:18:04.717921   66668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 21:18:04.736214   66668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 21:18:04.742166   66668 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 21:18:04.742240   66668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 21:18:04.749075   66668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
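Note: the hash-and-link sequence logged above (openssl x509 -hash -noout, then ln -fs into /etc/ssl/certs/<hash>.0) is what makes the copied CA certificates discoverable through OpenSSL's CApath lookup. The following is a minimal local Go sketch of that same step, not minikube's actual code path (minikube issues these commands through its ssh_runner against the guest); the paths are placeholders taken from the log above.

    // Compute the OpenSSL subject hash of a certificate and link it as
    // <hash>.0 in the certs directory, mirroring `test -L ... || ln -fs ...`.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkBySubjectHash(certPath, certsDir string) error {
        // Same probe as in the log: ask openssl for the subject hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        // Only create the link if it is not already present.
        if _, err := os.Lstat(link); err == nil {
            return nil
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }

Running this against the minikubeCA.pem copied earlier would produce the b5213941.0 link seen in the log, assuming the same certificate contents.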
	I0708 21:18:04.759919   66668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 21:18:04.765214   66668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 21:18:04.771630   66668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 21:18:04.778226   66668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 21:18:04.784934   66668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 21:18:04.791183   66668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 21:18:04.798101   66668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
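The `openssl x509 -checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours. A rough Go equivalent, shown only as a sketch of what that check means (the path is a placeholder for the certs checked in this run), is:

    // Report whether a PEM certificate expires within the given window,
    // the same condition `openssl x509 -checkend <seconds>` tests.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // openssl -checkend exits non-zero in the "expires soon" case.
        fmt.Println("expires within 24h:", soon)
    }
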
	I0708 21:18:04.804925   66668 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-467273 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.2 ClusterName:kubernetes-upgrade-467273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 21:18:04.805035   66668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 21:18:04.805118   66668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 21:18:04.866250   66668 cri.go:89] found id: "6ec806866ca631d72235bcaf31b8c3a6439349e3a2c00b0c9999b1c91d2adc2d"
	I0708 21:18:04.866276   66668 cri.go:89] found id: "1ca6f3ba21d6445f4f1ed191cf76ccbdd69f8088fb4c28ab4acb44b61804a516"
	I0708 21:18:04.866282   66668 cri.go:89] found id: "ebb6717a634a33d7230b360b492870c7fee8ac0c9e80e2804b1a8af393075dd1"
	I0708 21:18:04.866286   66668 cri.go:89] found id: "a08979d1646d3437c9a7a4e9ae5917b894109e9a2f13b961f5537b8225b8c9ad"
	I0708 21:18:04.866312   66668 cri.go:89] found id: "63788a8f11108b30c01ec38f8da70fff034ac5a71d8002a6d91b4a03ffb4c6f7"
	I0708 21:18:04.866318   66668 cri.go:89] found id: "e1f4492b25a5b4576b5fd3f159ecf0a5d52736139362b77cf47529443d5b0df1"
	I0708 21:18:04.866322   66668 cri.go:89] found id: "66e5cc4589ff7e75e23447bd3643762736fea338e498253d01641dadcdcb797b"
	I0708 21:18:04.866326   66668 cri.go:89] found id: "dea6aa9b4403572f5e4006c9244da739424672674d3259a565840f4eaaeeef6f"
	I0708 21:18:04.866331   66668 cri.go:89] found id: "304ab05083f078646efbe619d780f14ff81eedb9b315c6ec41a1097cbae15a5d"
	I0708 21:18:04.866341   66668 cri.go:89] found id: "f07294129c3ea9efb64438915b2fea07ce5d6449909fa5edb0fb83d1c052900c"
	I0708 21:18:04.866345   66668 cri.go:89] found id: "3c2b24b44895ac3da43e46ce8cc2bf1757325d6d6329eb36e185cfd40c666255"
	I0708 21:18:04.866349   66668 cri.go:89] found id: ""
	I0708 21:18:04.866398   66668 ssh_runner.go:195] Run: sudo runc list -f json
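The "found id:" lines above come from listing kube-system containers through the CRI runtime. As a minimal local approximation of that query (minikube actually runs it via `sudo -s eval` through its ssh_runner on the guest VM), the same crictl filter can be invoked directly and its output split into IDs:

    // List kube-system container IDs via crictl, one ID per output line.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listKubeSystemContainerIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemContainerIDs()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }

An empty trailing entry in the log (found id: "") simply reflects the final newline in crictl's output, which the sketch above filters out.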
	
	
	==> CRI-O <==
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.086950610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473496086907282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07553a25-3dc7-4297-9b66-e795f65f4138 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.087908661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff5596d2-8cde-4ae9-a4d8-09f51cb6c0a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.087981500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff5596d2-8cde-4ae9-a4d8-09f51cb6c0a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.088682076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47062c80f416e4f8fd09f7c80d4791f911b1b5d626614f242e8dd922347c45cf,PodSandboxId:cb2dc3ed980ec6952aff1f6afdabf102010945fd01e41ff6e42cbf867a0bfc2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720473493349630089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hjjdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee139105-983d-447e-8969-98af5280b677,},Annotations:map[string]string{io.kubernetes.container.hash: db17f48b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20324272ab48858b32ee57fec580c6bc4ab578926c213af0360a73de1a40abf,PodSandboxId:65b6d30fd055fd14df94ff7a39e169fa471bfe3e49210188208b3bfbc42f8ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720473493178622017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcxdn,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3ec07c28-e759-4112-b3ac-a2cd608e41c7,},Annotations:map[string]string{io.kubernetes.container.hash: 890aef9e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2656f352eee9a169e63d0b40fec2da1941e9dc27ed7db3ea39c46e84fb871aa,PodSandboxId:6af6b8c6acde7c0cbdc535bd1a5c8d204f437b98eaeda37ae37dd43c518f53f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAIN
ER_RUNNING,CreatedAt:1720473492423174753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctwmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67,},Annotations:map[string]string{io.kubernetes.container.hash: b43268da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a63bdd18ed271b4836ee70d2a447e42f6b09088eaa36d8265f0a1d525b5a441,PodSandboxId:a15045a6e643ac280ce552534528259f09459c3f46c38c067387d01da9ff52ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
0473492405159060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190ec35a-f56e-4a19-9ac0-2b0f1e08aea6,},Annotations:map[string]string{io.kubernetes.container.hash: 136c162c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c2e3cd55e26cae56fc771f5c05dc95d1beb780604fdd1f93d6e6a2d6467a59,PodSandboxId:1e5628668dfc27843d82384866156741ffbd4f9c0115c4bb9dce6ccb97f71e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720473487922514207,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de4879fb01c45585f9bef3ac07e1783,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c3d6f9773988cce15fbe631c076bd20f8b84ccdd1ade27536d60a3a14ef2185,PodSandboxId:8e5d89a98e08a2d9d8dd5001dece9e15e970ec10429a6bc67bd18fd14dc69053,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720473487734845328,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec8cc4f88e01bcbb3d480d89714d1d0,},Annotations:map[string]string{io.kubernetes.container.hash: cc27cd46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f9fbe4e7ba9c8e82acec66eb70818f7a5f47b0a7adff02c0caad4b920253987,PodSandboxId:1f050778945e292e4d5bbdd3f6c42f25105ff89417f70e5b78b016d3d42cd811,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720473487583979864,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3203022973f92002ef0f8cb9f3f1b678,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc26d2a2ef065e690624540f7b3cc719a4c78644ed3adf19f74e2864d79c10e5,PodSandboxId:28e973e38a075ab912df22dd848c3dd3304271d122d97ef749fa1860fc6bac0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720473487571091970
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1024904b88169618252305745d985e,},Annotations:map[string]string{io.kubernetes.container.hash: 60e2d61a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec806866ca631d72235bcaf31b8c3a6439349e3a2c00b0c9999b1c91d2adc2d,PodSandboxId:1f050778945e292e4d5bbdd3f6c42f25105ff89417f70e5b78b016d3d42cd811,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720473484515211824,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3203022973f92002ef0f8cb9f3f1b678,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca6f3ba21d6445f4f1ed191cf76ccbdd69f8088fb4c28ab4acb44b61804a516,PodSandboxId:a15045a6e643ac280ce552534528259f09459c3f46c38c067387d01da9ff52ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720473483629404317
,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190ec35a-f56e-4a19-9ac0-2b0f1e08aea6,},Annotations:map[string]string{io.kubernetes.container.hash: 136c162c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb6717a634a33d7230b360b492870c7fee8ac0c9e80e2804b1a8af393075dd1,PodSandboxId:6af6b8c6acde7c0cbdc535bd1a5c8d204f437b98eaeda37ae37dd43c518f53f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720473483526722876,Labels:map[string]str
ing{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctwmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67,},Annotations:map[string]string{io.kubernetes.container.hash: b43268da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08979d1646d3437c9a7a4e9ae5917b894109e9a2f13b961f5537b8225b8c9ad,PodSandboxId:28e973e38a075ab912df22dd848c3dd3304271d122d97ef749fa1860fc6bac0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720473483464029150,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1024904b88169618252305745d985e,},Annotations:map[string]string{io.kubernetes.container.hash: 60e2d61a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63788a8f11108b30c01ec38f8da70fff034ac5a71d8002a6d91b4a03ffb4c6f7,PodSandboxId:5f911ff17bc3528212382330047cd8236eca51d4a42c11dc4072fc3e0d808f5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720473450574009273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-kcxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec07c28-e759-4112-b3ac-a2cd608e41c7,},Annotations:map[string]string{io.kubernetes.container.hash: 890aef9e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f4492b25a5b4576b5fd3f159ecf0a5d52736139362b77cf47529443d5b0df1,PodSandboxId:8803192ccc577f72254df04c65d80008c5c725d2dc35be4384fda4322753f849,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba
382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720473450553838918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hjjdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee139105-983d-447e-8969-98af5280b677,},Annotations:map[string]string{io.kubernetes.container.hash: db17f48b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f07294129c3ea9efb64438915b2fea07ce5d6449909fa5edb0fb83d1c052900c,PodSandboxId:1193bc18861c4fd13c6f75821d31a31ec3dbb6000811705b9369b7c164839f79,Metadata:&ContainerMetadata{Name:kube-schedu
ler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720473429500276446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de4879fb01c45585f9bef3ac07e1783,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304ab05083f078646efbe619d780f14ff81eedb9b315c6ec41a1097cbae15a5d,PodSandboxId:7b58fa9d8b57d2a84550ca31c2c65ffd11ef548e8d8f30a3d7ecf1ef22d71965,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720473429519633649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec8cc4f88e01bcbb3d480d89714d1d0,},Annotations:map[string]string{io.kubernetes.container.hash: cc27cd46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff5596d2-8cde-4ae9-a4d8-09f51cb6c0a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.135609523Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9295206a-f2af-42d0-a3d5-2b210bd41996 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.135730399Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9295206a-f2af-42d0-a3d5-2b210bd41996 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.137004145Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58c35589-622c-4551-84c7-0c8f6a5ac644 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.137670987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473496137625912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58c35589-622c-4551-84c7-0c8f6a5ac644 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.139110888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c439c4d-ac6a-405c-9962-0587a92f4cb5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.139193887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c439c4d-ac6a-405c-9962-0587a92f4cb5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.139876164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47062c80f416e4f8fd09f7c80d4791f911b1b5d626614f242e8dd922347c45cf,PodSandboxId:cb2dc3ed980ec6952aff1f6afdabf102010945fd01e41ff6e42cbf867a0bfc2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720473493349630089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hjjdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee139105-983d-447e-8969-98af5280b677,},Annotations:map[string]string{io.kubernetes.container.hash: db17f48b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20324272ab48858b32ee57fec580c6bc4ab578926c213af0360a73de1a40abf,PodSandboxId:65b6d30fd055fd14df94ff7a39e169fa471bfe3e49210188208b3bfbc42f8ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720473493178622017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcxdn,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3ec07c28-e759-4112-b3ac-a2cd608e41c7,},Annotations:map[string]string{io.kubernetes.container.hash: 890aef9e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2656f352eee9a169e63d0b40fec2da1941e9dc27ed7db3ea39c46e84fb871aa,PodSandboxId:6af6b8c6acde7c0cbdc535bd1a5c8d204f437b98eaeda37ae37dd43c518f53f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAIN
ER_RUNNING,CreatedAt:1720473492423174753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctwmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67,},Annotations:map[string]string{io.kubernetes.container.hash: b43268da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a63bdd18ed271b4836ee70d2a447e42f6b09088eaa36d8265f0a1d525b5a441,PodSandboxId:a15045a6e643ac280ce552534528259f09459c3f46c38c067387d01da9ff52ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
0473492405159060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190ec35a-f56e-4a19-9ac0-2b0f1e08aea6,},Annotations:map[string]string{io.kubernetes.container.hash: 136c162c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c2e3cd55e26cae56fc771f5c05dc95d1beb780604fdd1f93d6e6a2d6467a59,PodSandboxId:1e5628668dfc27843d82384866156741ffbd4f9c0115c4bb9dce6ccb97f71e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720473487922514207,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de4879fb01c45585f9bef3ac07e1783,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c3d6f9773988cce15fbe631c076bd20f8b84ccdd1ade27536d60a3a14ef2185,PodSandboxId:8e5d89a98e08a2d9d8dd5001dece9e15e970ec10429a6bc67bd18fd14dc69053,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720473487734845328,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec8cc4f88e01bcbb3d480d89714d1d0,},Annotations:map[string]string{io.kubernetes.container.hash: cc27cd46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f9fbe4e7ba9c8e82acec66eb70818f7a5f47b0a7adff02c0caad4b920253987,PodSandboxId:1f050778945e292e4d5bbdd3f6c42f25105ff89417f70e5b78b016d3d42cd811,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720473487583979864,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3203022973f92002ef0f8cb9f3f1b678,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc26d2a2ef065e690624540f7b3cc719a4c78644ed3adf19f74e2864d79c10e5,PodSandboxId:28e973e38a075ab912df22dd848c3dd3304271d122d97ef749fa1860fc6bac0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720473487571091970
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1024904b88169618252305745d985e,},Annotations:map[string]string{io.kubernetes.container.hash: 60e2d61a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec806866ca631d72235bcaf31b8c3a6439349e3a2c00b0c9999b1c91d2adc2d,PodSandboxId:1f050778945e292e4d5bbdd3f6c42f25105ff89417f70e5b78b016d3d42cd811,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720473484515211824,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3203022973f92002ef0f8cb9f3f1b678,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca6f3ba21d6445f4f1ed191cf76ccbdd69f8088fb4c28ab4acb44b61804a516,PodSandboxId:a15045a6e643ac280ce552534528259f09459c3f46c38c067387d01da9ff52ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720473483629404317
,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190ec35a-f56e-4a19-9ac0-2b0f1e08aea6,},Annotations:map[string]string{io.kubernetes.container.hash: 136c162c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb6717a634a33d7230b360b492870c7fee8ac0c9e80e2804b1a8af393075dd1,PodSandboxId:6af6b8c6acde7c0cbdc535bd1a5c8d204f437b98eaeda37ae37dd43c518f53f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720473483526722876,Labels:map[string]str
ing{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctwmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67,},Annotations:map[string]string{io.kubernetes.container.hash: b43268da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08979d1646d3437c9a7a4e9ae5917b894109e9a2f13b961f5537b8225b8c9ad,PodSandboxId:28e973e38a075ab912df22dd848c3dd3304271d122d97ef749fa1860fc6bac0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720473483464029150,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1024904b88169618252305745d985e,},Annotations:map[string]string{io.kubernetes.container.hash: 60e2d61a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63788a8f11108b30c01ec38f8da70fff034ac5a71d8002a6d91b4a03ffb4c6f7,PodSandboxId:5f911ff17bc3528212382330047cd8236eca51d4a42c11dc4072fc3e0d808f5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720473450574009273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-kcxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec07c28-e759-4112-b3ac-a2cd608e41c7,},Annotations:map[string]string{io.kubernetes.container.hash: 890aef9e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f4492b25a5b4576b5fd3f159ecf0a5d52736139362b77cf47529443d5b0df1,PodSandboxId:8803192ccc577f72254df04c65d80008c5c725d2dc35be4384fda4322753f849,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba
382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720473450553838918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hjjdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee139105-983d-447e-8969-98af5280b677,},Annotations:map[string]string{io.kubernetes.container.hash: db17f48b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f07294129c3ea9efb64438915b2fea07ce5d6449909fa5edb0fb83d1c052900c,PodSandboxId:1193bc18861c4fd13c6f75821d31a31ec3dbb6000811705b9369b7c164839f79,Metadata:&ContainerMetadata{Name:kube-schedu
ler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720473429500276446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de4879fb01c45585f9bef3ac07e1783,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304ab05083f078646efbe619d780f14ff81eedb9b315c6ec41a1097cbae15a5d,PodSandboxId:7b58fa9d8b57d2a84550ca31c2c65ffd11ef548e8d8f30a3d7ecf1ef22d71965,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720473429519633649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec8cc4f88e01bcbb3d480d89714d1d0,},Annotations:map[string]string{io.kubernetes.container.hash: cc27cd46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c439c4d-ac6a-405c-9962-0587a92f4cb5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.189561623Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a14440a0-6e45-4560-9b7b-e47bd6e3a478 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.189643164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a14440a0-6e45-4560-9b7b-e47bd6e3a478 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.194896861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67b880da-0036-4ec4-8638-6d07f282ff8f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.195483807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473496195428899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67b880da-0036-4ec4-8638-6d07f282ff8f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.196515965Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20675d54-7775-4d3a-8014-06f69b577c5f name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.196596127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20675d54-7775-4d3a-8014-06f69b577c5f name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.196969713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47062c80f416e4f8fd09f7c80d4791f911b1b5d626614f242e8dd922347c45cf,PodSandboxId:cb2dc3ed980ec6952aff1f6afdabf102010945fd01e41ff6e42cbf867a0bfc2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720473493349630089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hjjdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee139105-983d-447e-8969-98af5280b677,},Annotations:map[string]string{io.kubernetes.container.hash: db17f48b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20324272ab48858b32ee57fec580c6bc4ab578926c213af0360a73de1a40abf,PodSandboxId:65b6d30fd055fd14df94ff7a39e169fa471bfe3e49210188208b3bfbc42f8ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720473493178622017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcxdn,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3ec07c28-e759-4112-b3ac-a2cd608e41c7,},Annotations:map[string]string{io.kubernetes.container.hash: 890aef9e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2656f352eee9a169e63d0b40fec2da1941e9dc27ed7db3ea39c46e84fb871aa,PodSandboxId:6af6b8c6acde7c0cbdc535bd1a5c8d204f437b98eaeda37ae37dd43c518f53f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAIN
ER_RUNNING,CreatedAt:1720473492423174753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctwmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67,},Annotations:map[string]string{io.kubernetes.container.hash: b43268da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a63bdd18ed271b4836ee70d2a447e42f6b09088eaa36d8265f0a1d525b5a441,PodSandboxId:a15045a6e643ac280ce552534528259f09459c3f46c38c067387d01da9ff52ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
0473492405159060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190ec35a-f56e-4a19-9ac0-2b0f1e08aea6,},Annotations:map[string]string{io.kubernetes.container.hash: 136c162c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c2e3cd55e26cae56fc771f5c05dc95d1beb780604fdd1f93d6e6a2d6467a59,PodSandboxId:1e5628668dfc27843d82384866156741ffbd4f9c0115c4bb9dce6ccb97f71e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720473487922514207,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de4879fb01c45585f9bef3ac07e1783,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c3d6f9773988cce15fbe631c076bd20f8b84ccdd1ade27536d60a3a14ef2185,PodSandboxId:8e5d89a98e08a2d9d8dd5001dece9e15e970ec10429a6bc67bd18fd14dc69053,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720473487734845328,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec8cc4f88e01bcbb3d480d89714d1d0,},Annotations:map[string]string{io.kubernetes.container.hash: cc27cd46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f9fbe4e7ba9c8e82acec66eb70818f7a5f47b0a7adff02c0caad4b920253987,PodSandboxId:1f050778945e292e4d5bbdd3f6c42f25105ff89417f70e5b78b016d3d42cd811,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720473487583979864,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3203022973f92002ef0f8cb9f3f1b678,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc26d2a2ef065e690624540f7b3cc719a4c78644ed3adf19f74e2864d79c10e5,PodSandboxId:28e973e38a075ab912df22dd848c3dd3304271d122d97ef749fa1860fc6bac0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720473487571091970
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1024904b88169618252305745d985e,},Annotations:map[string]string{io.kubernetes.container.hash: 60e2d61a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec806866ca631d72235bcaf31b8c3a6439349e3a2c00b0c9999b1c91d2adc2d,PodSandboxId:1f050778945e292e4d5bbdd3f6c42f25105ff89417f70e5b78b016d3d42cd811,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720473484515211824,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3203022973f92002ef0f8cb9f3f1b678,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca6f3ba21d6445f4f1ed191cf76ccbdd69f8088fb4c28ab4acb44b61804a516,PodSandboxId:a15045a6e643ac280ce552534528259f09459c3f46c38c067387d01da9ff52ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720473483629404317
,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190ec35a-f56e-4a19-9ac0-2b0f1e08aea6,},Annotations:map[string]string{io.kubernetes.container.hash: 136c162c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb6717a634a33d7230b360b492870c7fee8ac0c9e80e2804b1a8af393075dd1,PodSandboxId:6af6b8c6acde7c0cbdc535bd1a5c8d204f437b98eaeda37ae37dd43c518f53f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720473483526722876,Labels:map[string]str
ing{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctwmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67,},Annotations:map[string]string{io.kubernetes.container.hash: b43268da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08979d1646d3437c9a7a4e9ae5917b894109e9a2f13b961f5537b8225b8c9ad,PodSandboxId:28e973e38a075ab912df22dd848c3dd3304271d122d97ef749fa1860fc6bac0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720473483464029150,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1024904b88169618252305745d985e,},Annotations:map[string]string{io.kubernetes.container.hash: 60e2d61a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63788a8f11108b30c01ec38f8da70fff034ac5a71d8002a6d91b4a03ffb4c6f7,PodSandboxId:5f911ff17bc3528212382330047cd8236eca51d4a42c11dc4072fc3e0d808f5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720473450574009273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-kcxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec07c28-e759-4112-b3ac-a2cd608e41c7,},Annotations:map[string]string{io.kubernetes.container.hash: 890aef9e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f4492b25a5b4576b5fd3f159ecf0a5d52736139362b77cf47529443d5b0df1,PodSandboxId:8803192ccc577f72254df04c65d80008c5c725d2dc35be4384fda4322753f849,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba
382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720473450553838918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hjjdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee139105-983d-447e-8969-98af5280b677,},Annotations:map[string]string{io.kubernetes.container.hash: db17f48b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f07294129c3ea9efb64438915b2fea07ce5d6449909fa5edb0fb83d1c052900c,PodSandboxId:1193bc18861c4fd13c6f75821d31a31ec3dbb6000811705b9369b7c164839f79,Metadata:&ContainerMetadata{Name:kube-schedu
ler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720473429500276446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de4879fb01c45585f9bef3ac07e1783,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304ab05083f078646efbe619d780f14ff81eedb9b315c6ec41a1097cbae15a5d,PodSandboxId:7b58fa9d8b57d2a84550ca31c2c65ffd11ef548e8d8f30a3d7ecf1ef22d71965,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720473429519633649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec8cc4f88e01bcbb3d480d89714d1d0,},Annotations:map[string]string{io.kubernetes.container.hash: cc27cd46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20675d54-7775-4d3a-8014-06f69b577c5f name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.232761504Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85b988d1-7297-40dd-aa4b-2f52ec5a4448 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.232886038Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85b988d1-7297-40dd-aa4b-2f52ec5a4448 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.234302012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=687ecf8a-770c-44ea-9fd9-acb63d77a512 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.234845541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473496234818891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=687ecf8a-770c-44ea-9fd9-acb63d77a512 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.235438151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d8d2129-7d04-424f-a487-b8c853126418 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.235513498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d8d2129-7d04-424f-a487-b8c853126418 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:18:16 kubernetes-upgrade-467273 crio[2489]: time="2024-07-08 21:18:16.235912566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47062c80f416e4f8fd09f7c80d4791f911b1b5d626614f242e8dd922347c45cf,PodSandboxId:cb2dc3ed980ec6952aff1f6afdabf102010945fd01e41ff6e42cbf867a0bfc2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720473493349630089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hjjdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee139105-983d-447e-8969-98af5280b677,},Annotations:map[string]string{io.kubernetes.container.hash: db17f48b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20324272ab48858b32ee57fec580c6bc4ab578926c213af0360a73de1a40abf,PodSandboxId:65b6d30fd055fd14df94ff7a39e169fa471bfe3e49210188208b3bfbc42f8ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720473493178622017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kcxdn,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3ec07c28-e759-4112-b3ac-a2cd608e41c7,},Annotations:map[string]string{io.kubernetes.container.hash: 890aef9e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2656f352eee9a169e63d0b40fec2da1941e9dc27ed7db3ea39c46e84fb871aa,PodSandboxId:6af6b8c6acde7c0cbdc535bd1a5c8d204f437b98eaeda37ae37dd43c518f53f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAIN
ER_RUNNING,CreatedAt:1720473492423174753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctwmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67,},Annotations:map[string]string{io.kubernetes.container.hash: b43268da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a63bdd18ed271b4836ee70d2a447e42f6b09088eaa36d8265f0a1d525b5a441,PodSandboxId:a15045a6e643ac280ce552534528259f09459c3f46c38c067387d01da9ff52ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
0473492405159060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190ec35a-f56e-4a19-9ac0-2b0f1e08aea6,},Annotations:map[string]string{io.kubernetes.container.hash: 136c162c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c2e3cd55e26cae56fc771f5c05dc95d1beb780604fdd1f93d6e6a2d6467a59,PodSandboxId:1e5628668dfc27843d82384866156741ffbd4f9c0115c4bb9dce6ccb97f71e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720473487922514207,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de4879fb01c45585f9bef3ac07e1783,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c3d6f9773988cce15fbe631c076bd20f8b84ccdd1ade27536d60a3a14ef2185,PodSandboxId:8e5d89a98e08a2d9d8dd5001dece9e15e970ec10429a6bc67bd18fd14dc69053,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720473487734845328,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec8cc4f88e01bcbb3d480d89714d1d0,},Annotations:map[string]string{io.kubernetes.container.hash: cc27cd46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f9fbe4e7ba9c8e82acec66eb70818f7a5f47b0a7adff02c0caad4b920253987,PodSandboxId:1f050778945e292e4d5bbdd3f6c42f25105ff89417f70e5b78b016d3d42cd811,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720473487583979864,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3203022973f92002ef0f8cb9f3f1b678,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc26d2a2ef065e690624540f7b3cc719a4c78644ed3adf19f74e2864d79c10e5,PodSandboxId:28e973e38a075ab912df22dd848c3dd3304271d122d97ef749fa1860fc6bac0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720473487571091970
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1024904b88169618252305745d985e,},Annotations:map[string]string{io.kubernetes.container.hash: 60e2d61a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec806866ca631d72235bcaf31b8c3a6439349e3a2c00b0c9999b1c91d2adc2d,PodSandboxId:1f050778945e292e4d5bbdd3f6c42f25105ff89417f70e5b78b016d3d42cd811,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1720473484515211824,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3203022973f92002ef0f8cb9f3f1b678,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca6f3ba21d6445f4f1ed191cf76ccbdd69f8088fb4c28ab4acb44b61804a516,PodSandboxId:a15045a6e643ac280ce552534528259f09459c3f46c38c067387d01da9ff52ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720473483629404317
,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190ec35a-f56e-4a19-9ac0-2b0f1e08aea6,},Annotations:map[string]string{io.kubernetes.container.hash: 136c162c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb6717a634a33d7230b360b492870c7fee8ac0c9e80e2804b1a8af393075dd1,PodSandboxId:6af6b8c6acde7c0cbdc535bd1a5c8d204f437b98eaeda37ae37dd43c518f53f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1720473483526722876,Labels:map[string]str
ing{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctwmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67,},Annotations:map[string]string{io.kubernetes.container.hash: b43268da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08979d1646d3437c9a7a4e9ae5917b894109e9a2f13b961f5537b8225b8c9ad,PodSandboxId:28e973e38a075ab912df22dd848c3dd3304271d122d97ef749fa1860fc6bac0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1720473483464029150,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1024904b88169618252305745d985e,},Annotations:map[string]string{io.kubernetes.container.hash: 60e2d61a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63788a8f11108b30c01ec38f8da70fff034ac5a71d8002a6d91b4a03ffb4c6f7,PodSandboxId:5f911ff17bc3528212382330047cd8236eca51d4a42c11dc4072fc3e0d808f5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720473450574009273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns
-7db6d8ff4d-kcxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec07c28-e759-4112-b3ac-a2cd608e41c7,},Annotations:map[string]string{io.kubernetes.container.hash: 890aef9e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f4492b25a5b4576b5fd3f159ecf0a5d52736139362b77cf47529443d5b0df1,PodSandboxId:8803192ccc577f72254df04c65d80008c5c725d2dc35be4384fda4322753f849,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba
382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1720473450553838918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hjjdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee139105-983d-447e-8969-98af5280b677,},Annotations:map[string]string{io.kubernetes.container.hash: db17f48b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f07294129c3ea9efb64438915b2fea07ce5d6449909fa5edb0fb83d1c052900c,PodSandboxId:1193bc18861c4fd13c6f75821d31a31ec3dbb6000811705b9369b7c164839f79,Metadata:&ContainerMetadata{Name:kube-schedu
ler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1720473429500276446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de4879fb01c45585f9bef3ac07e1783,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304ab05083f078646efbe619d780f14ff81eedb9b315c6ec41a1097cbae15a5d,PodSandboxId:7b58fa9d8b57d2a84550ca31c2c65ffd11ef548e8d8f30a3d7ecf1ef22d71965,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1720473429519633649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-467273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec8cc4f88e01bcbb3d480d89714d1d0,},Annotations:map[string]string{io.kubernetes.container.hash: cc27cd46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d8d2129-7d04-424f-a487-b8c853126418 name=/runtime.v1.RuntimeService/ListContainers
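The cri-o debug entries above are the runtime's side of CRI gRPC calls (Version, ImageFsInfo, ListContainers) issued while the node comes back up after the upgrade. For reference only, a minimal Go sketch of issuing the same two RuntimeService calls directly against the cri-o socket might look like the following; the socket path and the k8s.io/cri-api client wiring are assumptions here, not something the test itself runs.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Assumed cri-o endpoint on the node; crictl talks to the same socket.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Mirrors the "/runtime.v1.RuntimeService/Version" request/response pairs in the log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

	// Mirrors "/runtime.v1.RuntimeService/ListContainers" with an empty filter, which is
	// why cri-o logs "No filters were applied, returning full container list" above.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
	}
}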
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	47062c80f416e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago        Running             coredns                   1                   cb2dc3ed980ec       coredns-7db6d8ff4d-hjjdp
	c20324272ab48       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   1                   65b6d30fd055f       coredns-7db6d8ff4d-kcxdn
	b2656f352eee9       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   3 seconds ago        Running             kube-proxy                2                   6af6b8c6acde7       kube-proxy-ctwmk
	5a63bdd18ed27       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       2                   a15045a6e643a       storage-provisioner
	22c2e3cd55e26       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   8 seconds ago        Running             kube-scheduler            1                   1e5628668dfc2       kube-scheduler-kubernetes-upgrade-467273
	7c3d6f9773988       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   8 seconds ago        Running             kube-apiserver            1                   8e5d89a98e08a       kube-apiserver-kubernetes-upgrade-467273
	1f9fbe4e7ba9c       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   8 seconds ago        Running             kube-controller-manager   2                   1f050778945e2       kube-controller-manager-kubernetes-upgrade-467273
	bc26d2a2ef065       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   8 seconds ago        Running             etcd                      2                   28e973e38a075       etcd-kubernetes-upgrade-467273
	6ec806866ca63       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   11 seconds ago       Exited              kube-controller-manager   1                   1f050778945e2       kube-controller-manager-kubernetes-upgrade-467273
	1ca6f3ba21d64       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago       Exited              storage-provisioner       1                   a15045a6e643a       storage-provisioner
	ebb6717a634a3       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   12 seconds ago       Exited              kube-proxy                1                   6af6b8c6acde7       kube-proxy-ctwmk
	a08979d1646d3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   12 seconds ago       Exited              etcd                      1                   28e973e38a075       etcd-kubernetes-upgrade-467273
	63788a8f11108       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   45 seconds ago       Exited              coredns                   0                   5f911ff17bc35       coredns-7db6d8ff4d-kcxdn
	e1f4492b25a5b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   45 seconds ago       Exited              coredns                   0                   8803192ccc577       coredns-7db6d8ff4d-hjjdp
	304ab05083f07       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   About a minute ago   Exited              kube-apiserver            0                   7b58fa9d8b57d       kube-apiserver-kubernetes-upgrade-467273
	f07294129c3ea       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   About a minute ago   Exited              kube-scheduler            0                   1193bc18861c4       kube-scheduler-kubernetes-upgrade-467273
	
	
	==> coredns [47062c80f416e4f8fd09f7c80d4791f911b1b5d626614f242e8dd922347c45cf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [63788a8f11108b30c01ec38f8da70fff034ac5a71d8002a6d91b4a03ffb4c6f7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c20324272ab48858b32ee57fec580c6bc4ab578926c213af0360a73de1a40abf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e1f4492b25a5b4576b5fd3f159ecf0a5d52736139362b77cf47529443d5b0df1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-467273
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-467273
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 21:17:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-467273
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 21:18:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 21:18:11 +0000   Mon, 08 Jul 2024 21:17:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 21:18:11 +0000   Mon, 08 Jul 2024 21:17:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 21:18:11 +0000   Mon, 08 Jul 2024 21:17:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 21:18:11 +0000   Mon, 08 Jul 2024 21:17:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.94
	  Hostname:    kubernetes-upgrade-467273
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1dcc7ec6f0143dca1f9b2d883d10512
	  System UUID:                a1dcc7ec-6f01-43dc-a1f9-b2d883d10512
	  Boot ID:                    b3cd725e-2ebf-4af1-946b-4c99c2e54c31
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-hjjdp                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     47s
	  kube-system                 coredns-7db6d8ff4d-kcxdn                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     47s
	  kube-system                 etcd-kubernetes-upgrade-467273                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         62s
	  kube-system                 kube-apiserver-kubernetes-upgrade-467273             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-467273    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-ctwmk                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-scheduler-kubernetes-upgrade-467273             100m (5%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    0 (0%)
	  memory             240Mi (11%)   340Mi (16%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 46s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node kubernetes-upgrade-467273 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x7 over 68s)  kubelet          Node kubernetes-upgrade-467273 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node kubernetes-upgrade-467273 status is now: NodeHasSufficientMemory
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           48s                node-controller  Node kubernetes-upgrade-467273 event: Registered Node kubernetes-upgrade-467273 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-467273 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-467273 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-467273 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.360460] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.066020] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076802] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.169438] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.155644] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[Jul 8 21:17] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +5.052557] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +0.064567] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.512291] systemd-fstab-generator[856]: Ignoring "noauto" option for root device
	[  +6.955470] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	[  +0.075558] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.610923] kauditd_printk_skb: 18 callbacks suppressed
	[Jul 8 21:18] systemd-fstab-generator[2189]: Ignoring "noauto" option for root device
	[  +0.084314] kauditd_printk_skb: 76 callbacks suppressed
	[  +0.076902] systemd-fstab-generator[2201]: Ignoring "noauto" option for root device
	[  +0.204656] systemd-fstab-generator[2215]: Ignoring "noauto" option for root device
	[  +0.164874] systemd-fstab-generator[2227]: Ignoring "noauto" option for root device
	[  +0.508801] systemd-fstab-generator[2299]: Ignoring "noauto" option for root device
	[  +1.693877] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +2.802924] systemd-fstab-generator[3083]: Ignoring "noauto" option for root device
	[  +0.081315] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.598395] kauditd_printk_skb: 51 callbacks suppressed
	[  +1.640201] systemd-fstab-generator[3880]: Ignoring "noauto" option for root device
	
	
	==> etcd [a08979d1646d3437c9a7a4e9ae5917b894109e9a2f13b961f5537b8225b8c9ad] <==
	{"level":"info","ts":"2024-07-08T21:18:03.731565Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"10.689981ms"}
	{"level":"info","ts":"2024-07-08T21:18:03.751696Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-08T21:18:03.781241Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ea1ef65d35c8a708","local-member-id":"edae0ed0fe08603a","commit-index":408}
	{"level":"info","ts":"2024-07-08T21:18:03.78148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-08T21:18:03.781679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a became follower at term 2"}
	{"level":"info","ts":"2024-07-08T21:18:03.7817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft edae0ed0fe08603a [peers: [], term: 2, commit: 408, applied: 0, lastindex: 408, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-08T21:18:03.784944Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-08T21:18:03.799685Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":395}
	{"level":"info","ts":"2024-07-08T21:18:03.80205Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-08T21:18:03.807925Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"edae0ed0fe08603a","timeout":"7s"}
	{"level":"info","ts":"2024-07-08T21:18:03.808253Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"edae0ed0fe08603a"}
	{"level":"info","ts":"2024-07-08T21:18:03.8084Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"edae0ed0fe08603a","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-08T21:18:03.812024Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-08T21:18:03.814141Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T21:18:03.814195Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T21:18:03.814208Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T21:18:03.815874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a switched to configuration voters=(17126642723714523194)"}
	{"level":"info","ts":"2024-07-08T21:18:03.81596Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ea1ef65d35c8a708","local-member-id":"edae0ed0fe08603a","added-peer-id":"edae0ed0fe08603a","added-peer-peer-urls":["https://192.168.50.94:2380"]}
	{"level":"info","ts":"2024-07-08T21:18:03.816082Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ea1ef65d35c8a708","local-member-id":"edae0ed0fe08603a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:18:03.816115Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:18:03.821575Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T21:18:03.82185Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"edae0ed0fe08603a","initial-advertise-peer-urls":["https://192.168.50.94:2380"],"listen-peer-urls":["https://192.168.50.94:2380"],"advertise-client-urls":["https://192.168.50.94:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.94:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T21:18:03.821915Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T21:18:03.822032Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.94:2380"}
	{"level":"info","ts":"2024-07-08T21:18:03.822064Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.94:2380"}
	
	
	==> etcd [bc26d2a2ef065e690624540f7b3cc719a4c78644ed3adf19f74e2864d79c10e5] <==
	{"level":"info","ts":"2024-07-08T21:18:08.00245Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T21:18:08.002526Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T21:18:08.002924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a switched to configuration voters=(17126642723714523194)"}
	{"level":"info","ts":"2024-07-08T21:18:08.00341Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ea1ef65d35c8a708","local-member-id":"edae0ed0fe08603a","added-peer-id":"edae0ed0fe08603a","added-peer-peer-urls":["https://192.168.50.94:2380"]}
	{"level":"info","ts":"2024-07-08T21:18:08.004001Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ea1ef65d35c8a708","local-member-id":"edae0ed0fe08603a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:18:08.00421Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:18:08.01494Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T21:18:08.01748Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"edae0ed0fe08603a","initial-advertise-peer-urls":["https://192.168.50.94:2380"],"listen-peer-urls":["https://192.168.50.94:2380"],"advertise-client-urls":["https://192.168.50.94:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.94:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T21:18:08.017644Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T21:18:08.015297Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.94:2380"}
	{"level":"info","ts":"2024-07-08T21:18:08.01903Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.94:2380"}
	{"level":"info","ts":"2024-07-08T21:18:09.772432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-08T21:18:09.772579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-08T21:18:09.772658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a received MsgPreVoteResp from edae0ed0fe08603a at term 2"}
	{"level":"info","ts":"2024-07-08T21:18:09.772706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a became candidate at term 3"}
	{"level":"info","ts":"2024-07-08T21:18:09.772739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a received MsgVoteResp from edae0ed0fe08603a at term 3"}
	{"level":"info","ts":"2024-07-08T21:18:09.772777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a became leader at term 3"}
	{"level":"info","ts":"2024-07-08T21:18:09.77281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: edae0ed0fe08603a elected leader edae0ed0fe08603a at term 3"}
	{"level":"info","ts":"2024-07-08T21:18:09.775185Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"edae0ed0fe08603a","local-member-attributes":"{Name:kubernetes-upgrade-467273 ClientURLs:[https://192.168.50.94:2379]}","request-path":"/0/members/edae0ed0fe08603a/attributes","cluster-id":"ea1ef65d35c8a708","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T21:18:09.775359Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T21:18:09.775484Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T21:18:09.77551Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T21:18:09.775495Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T21:18:09.777525Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.94:2379"}
	{"level":"info","ts":"2024-07-08T21:18:09.778486Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:18:16 up 1 min,  0 users,  load average: 1.96, 0.59, 0.21
	Linux kubernetes-upgrade-467273 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [304ab05083f078646efbe619d780f14ff81eedb9b315c6ec41a1097cbae15a5d] <==
	I0708 21:17:14.678074       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 21:17:14.686583       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 21:17:15.254225       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 21:17:15.268392       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 21:17:15.290605       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0708 21:17:15.308943       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 21:17:29.446537       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0708 21:17:29.595901       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0708 21:17:53.344996       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0708 21:17:53.346198       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.346293       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.346860       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.347032       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.347148       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.347207       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.350786       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.350891       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.351066       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.351208       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.351277       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.353682       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.353768       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.356615       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.364734       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 21:17:53.365627       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
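
The wall of gRPC warnings above is the outgoing kube-apiserver losing its etcd backend: every internal channel to 127.0.0.1:2379 fails with "connection refused" while the control plane is replaced during the upgrade. A minimal, hypothetical Go sketch (standard library only, not minikube code) of probing that endpoint until it accepts connections again — a quick way to tell "etcd not listening" apart from other failures when run on the node (e.g. via minikube ssh):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const addr = "127.0.0.1:2379" // etcd client endpoint seen in the log above
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("etcd is accepting connections on", addr)
                return
            }
            // While etcd is down this prints "connect: connection refused",
            // the same error the apiserver logs above.
            fmt.Println("still waiting:", err)
            time.Sleep(500 * time.Millisecond)
        }
    }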
	
	
	==> kube-apiserver [7c3d6f9773988cce15fbe631c076bd20f8b84ccdd1ade27536d60a3a14ef2185] <==
	I0708 21:18:11.368766       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0708 21:18:11.478237       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0708 21:18:11.484893       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0708 21:18:11.487813       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0708 21:18:11.489747       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0708 21:18:11.490061       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0708 21:18:11.490178       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0708 21:18:11.490388       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0708 21:18:11.490826       1 shared_informer.go:320] Caches are synced for configmaps
	I0708 21:18:11.493086       1 aggregator.go:165] initial CRD sync complete...
	I0708 21:18:11.493193       1 autoregister_controller.go:141] Starting autoregister controller
	I0708 21:18:11.493225       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0708 21:18:11.493254       1 cache.go:39] Caches are synced for autoregister controller
	I0708 21:18:11.495533       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0708 21:18:11.496716       1 policy_source.go:224] refreshing policies
	I0708 21:18:11.507908       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0708 21:18:11.518128       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0708 21:18:11.544913       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0708 21:18:12.366517       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0708 21:18:12.897606       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 21:18:13.870679       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 21:18:13.882242       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 21:18:13.926939       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 21:18:14.042007       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 21:18:14.049927       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [1f9fbe4e7ba9c8e82acec66eb70818f7a5f47b0a7adff02c0caad4b920253987] <==
	I0708 21:18:14.592383       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0708 21:18:14.592577       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0708 21:18:14.597053       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0708 21:18:14.597283       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0708 21:18:14.597376       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0708 21:18:14.609484       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0708 21:18:14.611972       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0708 21:18:14.612047       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0708 21:18:14.612087       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0708 21:18:14.619537       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0708 21:18:14.619665       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0708 21:18:14.619763       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0708 21:18:14.619853       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0708 21:18:14.620037       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0708 21:18:14.620048       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0708 21:18:14.620095       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0708 21:18:14.620103       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0708 21:18:14.620120       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0708 21:18:14.620124       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0708 21:18:14.620136       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0708 21:18:14.620266       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0708 21:18:14.620836       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0708 21:18:14.626912       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0708 21:18:14.628173       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0708 21:18:14.628194       1 shared_informer.go:313] Waiting for caches to sync for TTL
	
	
	==> kube-controller-manager [6ec806866ca631d72235bcaf31b8c3a6439349e3a2c00b0c9999b1c91d2adc2d] <==
	
	
	==> kube-proxy [b2656f352eee9a169e63d0b40fec2da1941e9dc27ed7db3ea39c46e84fb871aa] <==
	I0708 21:18:12.980669       1 server_linux.go:69] "Using iptables proxy"
	I0708 21:18:13.007670       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.94"]
	I0708 21:18:13.105550       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 21:18:13.105703       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 21:18:13.105745       1 server_linux.go:165] "Using iptables Proxier"
	I0708 21:18:13.115218       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 21:18:13.115778       1 server.go:872] "Version info" version="v1.30.2"
	I0708 21:18:13.116063       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 21:18:13.133059       1 config.go:319] "Starting node config controller"
	I0708 21:18:13.136540       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 21:18:13.134132       1 config.go:192] "Starting service config controller"
	I0708 21:18:13.137585       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 21:18:13.134409       1 config.go:101] "Starting endpoint slice config controller"
	I0708 21:18:13.137606       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 21:18:13.241990       1 shared_informer.go:320] Caches are synced for node config
	I0708 21:18:13.242041       1 shared_informer.go:320] Caches are synced for service config
	I0708 21:18:13.242162       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ebb6717a634a33d7230b360b492870c7fee8ac0c9e80e2804b1a8af393075dd1] <==
	I0708 21:18:03.944887       1 server_linux.go:69] "Using iptables proxy"
	E0708 21:18:03.950047       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-467273\": dial tcp 192.168.50.94:8443: connect: connection refused"
	E0708 21:18:04.999878       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-467273\": dial tcp 192.168.50.94:8443: connect: connection refused"
	
	
	==> kube-scheduler [22c2e3cd55e26cae56fc771f5c05dc95d1beb780604fdd1f93d6e6a2d6467a59] <==
	I0708 21:18:09.361080       1 serving.go:380] Generated self-signed cert in-memory
	W0708 21:18:11.433251       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 21:18:11.433389       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 21:18:11.433404       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 21:18:11.433413       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 21:18:11.482938       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0708 21:18:11.485078       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 21:18:11.489661       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0708 21:18:11.490545       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0708 21:18:11.495784       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 21:18:11.490564       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0708 21:18:11.596780       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
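
The requestheader_controller warning above is the scheduler failing to read the kube-system/extension-apiserver-authentication ConfigMap (an RBAC "forbidden") and, as the next two lines state, continuing without that authentication configuration. A hypothetical client-go sketch of the same lookup (not the scheduler's actual code; assumes it runs in-cluster under some service account), usable to check whether the rolebinding suggested in the log is in place:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumption: runs inside the cluster
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("kube-system").
            Get(context.TODO(), "extension-apiserver-authentication", metav1.GetOptions{})
        if err != nil {
            // An RBAC "forbidden" error here mirrors the scheduler warning above.
            panic(err)
        }
        for key := range cm.Data {
            fmt.Println("found key:", key)
        }
    }

The rolebinding the log itself suggests (role extension-apiserver-authentication-reader in kube-system) is what normally grants this read.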
	
	
	==> kube-scheduler [f07294129c3ea9efb64438915b2fea07ce5d6449909fa5edb0fb83d1c052900c] <==
	E0708 21:17:13.647460       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 21:17:13.659240       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 21:17:13.659425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0708 21:17:13.699688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 21:17:13.699924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 21:17:13.734524       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 21:17:13.734658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 21:17:13.740150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 21:17:13.740273       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 21:17:13.759657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 21:17:13.759892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 21:17:13.831062       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 21:17:13.831426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 21:17:13.831724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 21:17:13.831827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 21:17:13.856644       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 21:17:13.856831       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 21:17:13.864277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 21:17:13.864473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 21:17:13.887171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 21:17:13.887658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0708 21:17:14.059946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 21:17:14.060211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0708 21:17:16.054681       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0708 21:17:53.343649       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 08 21:18:08 kubernetes-upgrade-467273 kubelet[3090]: W0708 21:18:08.031551    3090 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-467273&limit=500&resourceVersion=0": dial tcp 192.168.50.94:8443: connect: connection refused
	Jul 08 21:18:08 kubernetes-upgrade-467273 kubelet[3090]: E0708 21:18:08.031634    3090 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-467273&limit=500&resourceVersion=0": dial tcp 192.168.50.94:8443: connect: connection refused
	Jul 08 21:18:08 kubernetes-upgrade-467273 kubelet[3090]: W0708 21:18:08.073596    3090 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.94:8443: connect: connection refused
	Jul 08 21:18:08 kubernetes-upgrade-467273 kubelet[3090]: E0708 21:18:08.073678    3090 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.94:8443: connect: connection refused
	Jul 08 21:18:08 kubernetes-upgrade-467273 kubelet[3090]: W0708 21:18:08.080581    3090 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.94:8443: connect: connection refused
	Jul 08 21:18:08 kubernetes-upgrade-467273 kubelet[3090]: E0708 21:18:08.080639    3090 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.94:8443: connect: connection refused
	Jul 08 21:18:08 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:08.579242    3090 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-467273"
	Jul 08 21:18:11 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:11.602574    3090 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-467273"
	Jul 08 21:18:11 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:11.602724    3090 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-467273"
	Jul 08 21:18:11 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:11.604625    3090 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 08 21:18:11 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:11.605938    3090 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 08 21:18:12 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:12.050966    3090 apiserver.go:52] "Watching apiserver"
	Jul 08 21:18:12 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:12.054696    3090 topology_manager.go:215] "Topology Admit Handler" podUID="190ec35a-f56e-4a19-9ac0-2b0f1e08aea6" podNamespace="kube-system" podName="storage-provisioner"
	Jul 08 21:18:12 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:12.055079    3090 topology_manager.go:215] "Topology Admit Handler" podUID="cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67" podNamespace="kube-system" podName="kube-proxy-ctwmk"
	Jul 08 21:18:12 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:12.055237    3090 topology_manager.go:215] "Topology Admit Handler" podUID="ee139105-983d-447e-8969-98af5280b677" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hjjdp"
	Jul 08 21:18:12 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:12.055411    3090 topology_manager.go:215] "Topology Admit Handler" podUID="3ec07c28-e759-4112-b3ac-a2cd608e41c7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kcxdn"
	Jul 08 21:18:12 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:12.070520    3090 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 08 21:18:12 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:12.110082    3090 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/190ec35a-f56e-4a19-9ac0-2b0f1e08aea6-tmp\") pod \"storage-provisioner\" (UID: \"190ec35a-f56e-4a19-9ac0-2b0f1e08aea6\") " pod="kube-system/storage-provisioner"
	Jul 08 21:18:12 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:12.110486    3090 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67-lib-modules\") pod \"kube-proxy-ctwmk\" (UID: \"cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67\") " pod="kube-system/kube-proxy-ctwmk"
	Jul 08 21:18:12 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:12.110622    3090 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67-xtables-lock\") pod \"kube-proxy-ctwmk\" (UID: \"cc527ce5-a1f1-4dd5-a87b-0fa8f1814a67\") " pod="kube-system/kube-proxy-ctwmk"
	Jul 08 21:18:12 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:12.356891    3090 scope.go:117] "RemoveContainer" containerID="1ca6f3ba21d6445f4f1ed191cf76ccbdd69f8088fb4c28ab4acb44b61804a516"
	Jul 08 21:18:12 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:12.357198    3090 scope.go:117] "RemoveContainer" containerID="ebb6717a634a33d7230b360b492870c7fee8ac0c9e80e2804b1a8af393075dd1"
	Jul 08 21:18:15 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:15.325216    3090 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 08 21:18:15 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:15.793907    3090 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 08 21:18:16 kubernetes-upgrade-467273 kubelet[3090]: I0708 21:18:16.545581    3090 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [1ca6f3ba21d6445f4f1ed191cf76ccbdd69f8088fb4c28ab4acb44b61804a516] <==
	I0708 21:18:03.881269       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0708 21:18:03.891865       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [5a63bdd18ed271b4836ee70d2a447e42f6b09088eaa36d8265f0a1d525b5a441] <==
	I0708 21:18:12.798118       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 21:18:12.864272       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 21:18:12.864392       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 21:18:12.923024       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 21:18:12.927698       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-467273_39d0c06b-e236-4fb0-954b-0e8800a52d58!
	I0708 21:18:12.938017       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d168e890-2964-45cc-887f-768ed9f199be", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-467273_39d0c06b-e236-4fb0-954b-0e8800a52d58 became leader
	I0708 21:18:13.028087       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-467273_39d0c06b-e236-4fb0-954b-0e8800a52d58!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 21:18:15.689094   67837 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19195-5988/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-467273 -n kubernetes-upgrade-467273
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-467273 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-467273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-467273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-467273: (1.134205002s)
--- FAIL: TestKubernetesUpgrade (375.92s)
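
One detail worth noting from the stderr block above: "failed to read file .../lastStart.txt: bufio.Scanner: token too long" means a single line in lastStart.txt exceeded bufio.Scanner's default 64 KiB token limit, so the harness could not echo the last start log. A minimal, hypothetical sketch (standard library only, not the test-harness code) of reading such a file with an enlarged scanner buffer:

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("/home/jenkins/minikube-integration/19195-5988/.minikube/logs/lastStart.txt")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // The default cap is bufio.MaxScanTokenSize (64 KiB); allow lines up to 10 MiB.
        sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)
        for sc.Scan() {
            fmt.Println(sc.Text())
        }
        if err := sc.Err(); err != nil {
            // Without the Buffer call, an over-long line fails here with
            // bufio.ErrTooLong ("bufio.Scanner: token too long").
            fmt.Fprintln(os.Stderr, "scan error:", err)
        }
    }

An alternative with the same effect is bufio.NewReader plus ReadString('\n'), which grows its buffer as needed instead of enforcing a per-line cap.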

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (295.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-914355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-914355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m55.050407329s)

                                                
                                                
-- stdout --
	* [old-k8s-version-914355] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19195
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-914355" primary control-plane node in "old-k8s-version-914355" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:41:59.649790   49548 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:41:59.649887   49548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:41:59.649892   49548 out.go:304] Setting ErrFile to fd 2...
	I0708 20:41:59.649898   49548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:41:59.650147   49548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:41:59.650951   49548 out.go:298] Setting JSON to false
	I0708 20:41:59.651885   49548 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5069,"bootTime":1720466251,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:41:59.651946   49548 start.go:139] virtualization: kvm guest
	I0708 20:41:59.653772   49548 out.go:177] * [old-k8s-version-914355] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:41:59.655987   49548 notify.go:220] Checking for updates...
	I0708 20:41:59.657363   49548 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:41:59.658771   49548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:41:59.662283   49548 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:41:59.664569   49548 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:41:59.667086   49548 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:41:59.670019   49548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:41:59.671521   49548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:41:59.710109   49548 out.go:177] * Using the kvm2 driver based on user configuration
	I0708 20:41:59.711812   49548 start.go:297] selected driver: kvm2
	I0708 20:41:59.711934   49548 start.go:901] validating driver "kvm2" against <nil>
	I0708 20:41:59.711959   49548 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:41:59.713047   49548 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:41:59.735938   49548 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:41:59.753486   49548 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:41:59.753529   49548 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 20:41:59.753764   49548 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:41:59.753829   49548 cni.go:84] Creating CNI manager for ""
	I0708 20:41:59.753844   49548 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:41:59.753858   49548 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 20:41:59.753923   49548 start.go:340] cluster config:
	{Name:old-k8s-version-914355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:41:59.754054   49548 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:41:59.755744   49548 out.go:177] * Starting "old-k8s-version-914355" primary control-plane node in "old-k8s-version-914355" cluster
	I0708 20:41:59.756951   49548 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0708 20:41:59.756993   49548 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0708 20:41:59.757005   49548 cache.go:56] Caching tarball of preloaded images
	I0708 20:41:59.757112   49548 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 20:41:59.757127   49548 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0708 20:41:59.757573   49548 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/config.json ...
	I0708 20:41:59.757604   49548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/config.json: {Name:mk2a3161242d6c0771cce516f169d56cc007d51b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:41:59.757795   49548 start.go:360] acquireMachinesLock for old-k8s-version-914355: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:42:25.424618   49548 start.go:364] duration metric: took 25.666793473s to acquireMachinesLock for "old-k8s-version-914355"
	I0708 20:42:25.424694   49548 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-914355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 20:42:25.424802   49548 start.go:125] createHost starting for "" (driver="kvm2")
	I0708 20:42:25.427089   49548 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 20:42:25.427374   49548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:42:25.427438   49548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:42:25.445124   49548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44683
	I0708 20:42:25.445568   49548 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:42:25.446160   49548 main.go:141] libmachine: Using API Version  1
	I0708 20:42:25.446181   49548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:42:25.446546   49548 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:42:25.446748   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetMachineName
	I0708 20:42:25.446895   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:42:25.447050   49548 start.go:159] libmachine.API.Create for "old-k8s-version-914355" (driver="kvm2")
	I0708 20:42:25.447076   49548 client.go:168] LocalClient.Create starting
	I0708 20:42:25.447103   49548 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem
	I0708 20:42:25.447141   49548 main.go:141] libmachine: Decoding PEM data...
	I0708 20:42:25.447159   49548 main.go:141] libmachine: Parsing certificate...
	I0708 20:42:25.447209   49548 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem
	I0708 20:42:25.447226   49548 main.go:141] libmachine: Decoding PEM data...
	I0708 20:42:25.447238   49548 main.go:141] libmachine: Parsing certificate...
	I0708 20:42:25.447253   49548 main.go:141] libmachine: Running pre-create checks...
	I0708 20:42:25.447264   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .PreCreateCheck
	I0708 20:42:25.447591   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetConfigRaw
	I0708 20:42:25.447964   49548 main.go:141] libmachine: Creating machine...
	I0708 20:42:25.447977   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .Create
	I0708 20:42:25.448101   49548 main.go:141] libmachine: (old-k8s-version-914355) Creating KVM machine...
	I0708 20:42:25.449259   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found existing default KVM network
	I0708 20:42:25.450036   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:25.449888   49893 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:74:97:91} reservation:<nil>}
	I0708 20:42:25.450812   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:25.450733   49893 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011da50}
	I0708 20:42:25.450845   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | created network xml: 
	I0708 20:42:25.450859   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | <network>
	I0708 20:42:25.450875   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG |   <name>mk-old-k8s-version-914355</name>
	I0708 20:42:25.450884   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG |   <dns enable='no'/>
	I0708 20:42:25.450895   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG |   
	I0708 20:42:25.450912   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0708 20:42:25.450940   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG |     <dhcp>
	I0708 20:42:25.450982   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0708 20:42:25.450997   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG |     </dhcp>
	I0708 20:42:25.451008   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG |   </ip>
	I0708 20:42:25.451023   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG |   
	I0708 20:42:25.451030   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | </network>
	I0708 20:42:25.451044   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | 
	I0708 20:42:25.455747   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | trying to create private KVM network mk-old-k8s-version-914355 192.168.50.0/24...
	I0708 20:42:25.524025   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | private KVM network mk-old-k8s-version-914355 192.168.50.0/24 created
	I0708 20:42:25.524175   49548 main.go:141] libmachine: (old-k8s-version-914355) Setting up store path in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355 ...
	I0708 20:42:25.524206   49548 main.go:141] libmachine: (old-k8s-version-914355) Building disk image from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso
	I0708 20:42:25.524219   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:25.524162   49893 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:42:25.524296   49548 main.go:141] libmachine: (old-k8s-version-914355) Downloading /home/jenkins/minikube-integration/19195-5988/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso...
	I0708 20:42:25.748467   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:25.748344   49893 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa...
	I0708 20:42:25.849258   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:25.849138   49893 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/old-k8s-version-914355.rawdisk...
	I0708 20:42:25.849285   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Writing magic tar header
	I0708 20:42:25.849341   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Writing SSH key tar header
	I0708 20:42:25.849395   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:25.849251   49893 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355 ...
	I0708 20:42:25.849411   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355
	I0708 20:42:25.849419   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines
	I0708 20:42:25.849430   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:42:25.849444   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988
	I0708 20:42:25.849463   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0708 20:42:25.849498   49548 main.go:141] libmachine: (old-k8s-version-914355) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355 (perms=drwx------)
	I0708 20:42:25.849508   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Checking permissions on dir: /home/jenkins
	I0708 20:42:25.849521   49548 main.go:141] libmachine: (old-k8s-version-914355) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines (perms=drwxr-xr-x)
	I0708 20:42:25.849531   49548 main.go:141] libmachine: (old-k8s-version-914355) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube (perms=drwxr-xr-x)
	I0708 20:42:25.849546   49548 main.go:141] libmachine: (old-k8s-version-914355) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988 (perms=drwxrwxr-x)
	I0708 20:42:25.849559   49548 main.go:141] libmachine: (old-k8s-version-914355) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0708 20:42:25.849576   49548 main.go:141] libmachine: (old-k8s-version-914355) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0708 20:42:25.849589   49548 main.go:141] libmachine: (old-k8s-version-914355) Creating domain...
	I0708 20:42:25.849602   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Checking permissions on dir: /home
	I0708 20:42:25.849623   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Skipping /home - not owner
	I0708 20:42:25.850762   49548 main.go:141] libmachine: (old-k8s-version-914355) define libvirt domain using xml: 
	I0708 20:42:25.850785   49548 main.go:141] libmachine: (old-k8s-version-914355) <domain type='kvm'>
	I0708 20:42:25.850796   49548 main.go:141] libmachine: (old-k8s-version-914355)   <name>old-k8s-version-914355</name>
	I0708 20:42:25.850804   49548 main.go:141] libmachine: (old-k8s-version-914355)   <memory unit='MiB'>2200</memory>
	I0708 20:42:25.850813   49548 main.go:141] libmachine: (old-k8s-version-914355)   <vcpu>2</vcpu>
	I0708 20:42:25.850825   49548 main.go:141] libmachine: (old-k8s-version-914355)   <features>
	I0708 20:42:25.850833   49548 main.go:141] libmachine: (old-k8s-version-914355)     <acpi/>
	I0708 20:42:25.850843   49548 main.go:141] libmachine: (old-k8s-version-914355)     <apic/>
	I0708 20:42:25.850853   49548 main.go:141] libmachine: (old-k8s-version-914355)     <pae/>
	I0708 20:42:25.850865   49548 main.go:141] libmachine: (old-k8s-version-914355)     
	I0708 20:42:25.850873   49548 main.go:141] libmachine: (old-k8s-version-914355)   </features>
	I0708 20:42:25.850880   49548 main.go:141] libmachine: (old-k8s-version-914355)   <cpu mode='host-passthrough'>
	I0708 20:42:25.850888   49548 main.go:141] libmachine: (old-k8s-version-914355)   
	I0708 20:42:25.850892   49548 main.go:141] libmachine: (old-k8s-version-914355)   </cpu>
	I0708 20:42:25.850900   49548 main.go:141] libmachine: (old-k8s-version-914355)   <os>
	I0708 20:42:25.850905   49548 main.go:141] libmachine: (old-k8s-version-914355)     <type>hvm</type>
	I0708 20:42:25.850911   49548 main.go:141] libmachine: (old-k8s-version-914355)     <boot dev='cdrom'/>
	I0708 20:42:25.850915   49548 main.go:141] libmachine: (old-k8s-version-914355)     <boot dev='hd'/>
	I0708 20:42:25.850923   49548 main.go:141] libmachine: (old-k8s-version-914355)     <bootmenu enable='no'/>
	I0708 20:42:25.850927   49548 main.go:141] libmachine: (old-k8s-version-914355)   </os>
	I0708 20:42:25.850941   49548 main.go:141] libmachine: (old-k8s-version-914355)   <devices>
	I0708 20:42:25.850948   49548 main.go:141] libmachine: (old-k8s-version-914355)     <disk type='file' device='cdrom'>
	I0708 20:42:25.850957   49548 main.go:141] libmachine: (old-k8s-version-914355)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/boot2docker.iso'/>
	I0708 20:42:25.850967   49548 main.go:141] libmachine: (old-k8s-version-914355)       <target dev='hdc' bus='scsi'/>
	I0708 20:42:25.850996   49548 main.go:141] libmachine: (old-k8s-version-914355)       <readonly/>
	I0708 20:42:25.851018   49548 main.go:141] libmachine: (old-k8s-version-914355)     </disk>
	I0708 20:42:25.851031   49548 main.go:141] libmachine: (old-k8s-version-914355)     <disk type='file' device='disk'>
	I0708 20:42:25.851045   49548 main.go:141] libmachine: (old-k8s-version-914355)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0708 20:42:25.851069   49548 main.go:141] libmachine: (old-k8s-version-914355)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/old-k8s-version-914355.rawdisk'/>
	I0708 20:42:25.851080   49548 main.go:141] libmachine: (old-k8s-version-914355)       <target dev='hda' bus='virtio'/>
	I0708 20:42:25.851104   49548 main.go:141] libmachine: (old-k8s-version-914355)     </disk>
	I0708 20:42:25.851126   49548 main.go:141] libmachine: (old-k8s-version-914355)     <interface type='network'>
	I0708 20:42:25.851142   49548 main.go:141] libmachine: (old-k8s-version-914355)       <source network='mk-old-k8s-version-914355'/>
	I0708 20:42:25.851150   49548 main.go:141] libmachine: (old-k8s-version-914355)       <model type='virtio'/>
	I0708 20:42:25.851157   49548 main.go:141] libmachine: (old-k8s-version-914355)     </interface>
	I0708 20:42:25.851170   49548 main.go:141] libmachine: (old-k8s-version-914355)     <interface type='network'>
	I0708 20:42:25.851181   49548 main.go:141] libmachine: (old-k8s-version-914355)       <source network='default'/>
	I0708 20:42:25.851193   49548 main.go:141] libmachine: (old-k8s-version-914355)       <model type='virtio'/>
	I0708 20:42:25.851213   49548 main.go:141] libmachine: (old-k8s-version-914355)     </interface>
	I0708 20:42:25.851225   49548 main.go:141] libmachine: (old-k8s-version-914355)     <serial type='pty'>
	I0708 20:42:25.851238   49548 main.go:141] libmachine: (old-k8s-version-914355)       <target port='0'/>
	I0708 20:42:25.851247   49548 main.go:141] libmachine: (old-k8s-version-914355)     </serial>
	I0708 20:42:25.851256   49548 main.go:141] libmachine: (old-k8s-version-914355)     <console type='pty'>
	I0708 20:42:25.851267   49548 main.go:141] libmachine: (old-k8s-version-914355)       <target type='serial' port='0'/>
	I0708 20:42:25.851279   49548 main.go:141] libmachine: (old-k8s-version-914355)     </console>
	I0708 20:42:25.851289   49548 main.go:141] libmachine: (old-k8s-version-914355)     <rng model='virtio'>
	I0708 20:42:25.851299   49548 main.go:141] libmachine: (old-k8s-version-914355)       <backend model='random'>/dev/random</backend>
	I0708 20:42:25.851310   49548 main.go:141] libmachine: (old-k8s-version-914355)     </rng>
	I0708 20:42:25.851326   49548 main.go:141] libmachine: (old-k8s-version-914355)     
	I0708 20:42:25.851337   49548 main.go:141] libmachine: (old-k8s-version-914355)     
	I0708 20:42:25.851349   49548 main.go:141] libmachine: (old-k8s-version-914355)   </devices>
	I0708 20:42:25.851359   49548 main.go:141] libmachine: (old-k8s-version-914355) </domain>
	I0708 20:42:25.851369   49548 main.go:141] libmachine: (old-k8s-version-914355) 
	I0708 20:42:25.857853   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:82:e8:be in network default
	I0708 20:42:25.858362   49548 main.go:141] libmachine: (old-k8s-version-914355) Ensuring networks are active...
	I0708 20:42:25.858387   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:25.859124   49548 main.go:141] libmachine: (old-k8s-version-914355) Ensuring network default is active
	I0708 20:42:25.859395   49548 main.go:141] libmachine: (old-k8s-version-914355) Ensuring network mk-old-k8s-version-914355 is active
	I0708 20:42:25.859850   49548 main.go:141] libmachine: (old-k8s-version-914355) Getting domain xml...
	I0708 20:42:25.860485   49548 main.go:141] libmachine: (old-k8s-version-914355) Creating domain...
	I0708 20:42:27.102045   49548 main.go:141] libmachine: (old-k8s-version-914355) Waiting to get IP...
	I0708 20:42:27.102870   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:27.103297   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:27.103338   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:27.103254   49893 retry.go:31] will retry after 249.892984ms: waiting for machine to come up
	I0708 20:42:27.354786   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:27.355340   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:27.355370   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:27.355299   49893 retry.go:31] will retry after 358.773579ms: waiting for machine to come up
	I0708 20:42:27.715911   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:27.716291   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:27.716323   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:27.716256   49893 retry.go:31] will retry after 431.539678ms: waiting for machine to come up
	I0708 20:42:28.149705   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:28.150206   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:28.150231   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:28.150172   49893 retry.go:31] will retry after 551.844202ms: waiting for machine to come up
	I0708 20:42:28.704026   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:28.704497   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:28.704526   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:28.704449   49893 retry.go:31] will retry after 577.352164ms: waiting for machine to come up
	I0708 20:42:29.283021   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:29.283546   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:29.283574   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:29.283492   49893 retry.go:31] will retry after 793.05533ms: waiting for machine to come up
	I0708 20:42:30.079510   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:30.079927   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:30.079948   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:30.079870   49893 retry.go:31] will retry after 1.101736353s: waiting for machine to come up
	I0708 20:42:31.183439   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:31.183843   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:31.183872   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:31.183822   49893 retry.go:31] will retry after 1.402564728s: waiting for machine to come up
	I0708 20:42:32.588423   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:32.588920   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:32.588953   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:32.588891   49893 retry.go:31] will retry after 1.70885095s: waiting for machine to come up
	I0708 20:42:34.299955   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:34.300495   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:34.300525   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:34.300445   49893 retry.go:31] will retry after 1.614081319s: waiting for machine to come up
	I0708 20:42:35.916022   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:35.916535   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:35.916569   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:35.916450   49893 retry.go:31] will retry after 2.187541225s: waiting for machine to come up
	I0708 20:42:38.106413   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:38.106973   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:38.107001   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:38.106928   49893 retry.go:31] will retry after 3.254816979s: waiting for machine to come up
	I0708 20:42:41.363037   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:41.363506   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:41.363535   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:41.363425   49893 retry.go:31] will retry after 3.336926102s: waiting for machine to come up
	I0708 20:42:44.703946   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:44.704352   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:42:44.704377   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:42:44.704326   49893 retry.go:31] will retry after 3.830446185s: waiting for machine to come up
	I0708 20:42:48.538767   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:48.539382   49548 main.go:141] libmachine: (old-k8s-version-914355) Found IP for machine: 192.168.50.65
	I0708 20:42:48.539405   49548 main.go:141] libmachine: (old-k8s-version-914355) Reserving static IP address...
	I0708 20:42:48.539418   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has current primary IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:48.539758   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-914355", mac: "52:54:00:2b:81:07", ip: "192.168.50.65"} in network mk-old-k8s-version-914355
	I0708 20:42:48.615094   49548 main.go:141] libmachine: (old-k8s-version-914355) Reserved static IP address: 192.168.50.65
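The "will retry after ..." lines above are a simple poll loop with growing, jittered delays while the guest's DHCP lease in the libvirt network appears. A minimal, generic Go sketch of that pattern (an illustration only, not minikube's retry.go; lookupIP is a hypothetical stand-in for the lease query):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases.
// In this stub it always fails, so the loop below runs until timeout.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, mirroring the increasing
		// "will retry after" intervals seen in the log.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}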
	I0708 20:42:48.615127   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Getting to WaitForSSH function...
	I0708 20:42:48.615138   49548 main.go:141] libmachine: (old-k8s-version-914355) Waiting for SSH to be available...
	I0708 20:42:48.617629   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:48.617950   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:48.617971   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:48.618119   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Using SSH client type: external
	I0708 20:42:48.618144   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa (-rw-------)
	I0708 20:42:48.618215   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:42:48.618234   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | About to run SSH command:
	I0708 20:42:48.618253   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | exit 0
	I0708 20:42:48.747881   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | SSH cmd err, output: <nil>: 
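The WaitForSSH step above probes the guest by invoking the system ssh binary with the command "exit 0" and treating a zero exit status as success. A rough stand-alone equivalent in Go, reusing the key path, options and address printed in the log (a sketch only, not minikube's sshutil code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as in the log: run "exit 0" on the guest and treat a
	// zero exit status as "SSH is available".
	key := "/home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa"
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.50.65",
		"exit 0")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("SSH not ready yet: %v (%s)\n", err, out)
		return
	}
	fmt.Println("SSH is available")
}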
	I0708 20:42:48.748171   49548 main.go:141] libmachine: (old-k8s-version-914355) KVM machine creation complete!
	I0708 20:42:48.748480   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetConfigRaw
	I0708 20:42:48.749109   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:42:48.749290   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:42:48.749472   49548 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0708 20:42:48.749489   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetState
	I0708 20:42:48.750651   49548 main.go:141] libmachine: Detecting operating system of created instance...
	I0708 20:42:48.750671   49548 main.go:141] libmachine: Waiting for SSH to be available...
	I0708 20:42:48.750677   49548 main.go:141] libmachine: Getting to WaitForSSH function...
	I0708 20:42:48.750683   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:42:48.752833   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:48.753192   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:48.753218   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:48.753307   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:42:48.753468   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:48.753626   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:48.753803   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:42:48.753975   49548 main.go:141] libmachine: Using SSH client type: native
	I0708 20:42:48.754227   49548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0708 20:42:48.754244   49548 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0708 20:42:48.862999   49548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:42:48.863022   49548 main.go:141] libmachine: Detecting the provisioner...
	I0708 20:42:48.863029   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:42:48.865728   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:48.866095   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:48.866121   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:48.866322   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:42:48.866564   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:48.866728   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:48.866896   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:42:48.867101   49548 main.go:141] libmachine: Using SSH client type: native
	I0708 20:42:48.867280   49548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0708 20:42:48.867292   49548 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0708 20:42:48.980282   49548 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0708 20:42:48.980357   49548 main.go:141] libmachine: found compatible host: buildroot
	I0708 20:42:48.980366   49548 main.go:141] libmachine: Provisioning with buildroot...
	I0708 20:42:48.980373   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetMachineName
	I0708 20:42:48.980655   49548 buildroot.go:166] provisioning hostname "old-k8s-version-914355"
	I0708 20:42:48.980680   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetMachineName
	I0708 20:42:48.980874   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:42:48.983703   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:48.984050   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:48.984078   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:48.984192   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:42:48.984380   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:48.984565   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:48.984716   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:42:48.984916   49548 main.go:141] libmachine: Using SSH client type: native
	I0708 20:42:48.985202   49548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0708 20:42:48.985233   49548 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-914355 && echo "old-k8s-version-914355" | sudo tee /etc/hostname
	I0708 20:42:49.111669   49548 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-914355
	
	I0708 20:42:49.111692   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:42:49.114378   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.114708   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:49.114738   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.114880   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:42:49.115068   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:49.115252   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:49.115409   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:42:49.115589   49548 main.go:141] libmachine: Using SSH client type: native
	I0708 20:42:49.115810   49548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0708 20:42:49.115839   49548 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-914355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-914355/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-914355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:42:49.237458   49548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:42:49.237484   49548 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:42:49.237509   49548 buildroot.go:174] setting up certificates
	I0708 20:42:49.237521   49548 provision.go:84] configureAuth start
	I0708 20:42:49.237529   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetMachineName
	I0708 20:42:49.237865   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetIP
	I0708 20:42:49.240401   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.240769   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:49.240796   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.240894   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:42:49.243053   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.243397   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:49.243424   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.243571   49548 provision.go:143] copyHostCerts
	I0708 20:42:49.243625   49548 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:42:49.243633   49548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:42:49.243687   49548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:42:49.243781   49548 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:42:49.243789   49548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:42:49.243807   49548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:42:49.243878   49548 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:42:49.243885   49548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:42:49.243901   49548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:42:49.243975   49548 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-914355 san=[127.0.0.1 192.168.50.65 localhost minikube old-k8s-version-914355]
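provision.go then issues a server certificate whose SANs match the list above (127.0.0.1, 192.168.50.65, localhost, minikube, old-k8s-version-914355). A condensed Go standard-library sketch of minting such a certificate; it is self-signed here for brevity, whereas the log signs against ca.pem/ca-key.pem, and the organization string is only illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-914355"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.65")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-914355"},
	}
	// Self-signed for illustration; pass a CA certificate and key as
	// parent/signer to reproduce the ca.pem/ca-key.pem flow from the log.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}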
	I0708 20:42:49.301351   49548 provision.go:177] copyRemoteCerts
	I0708 20:42:49.301400   49548 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:42:49.301421   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:42:49.304132   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.304448   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:49.304471   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.304715   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:42:49.304898   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:49.305032   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:42:49.305143   49548 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa Username:docker}
	I0708 20:42:49.390497   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:42:49.418801   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0708 20:42:49.443683   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:42:49.468129   49548 provision.go:87] duration metric: took 230.597717ms to configureAuth
	I0708 20:42:49.468164   49548 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:42:49.468334   49548 config.go:182] Loaded profile config "old-k8s-version-914355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0708 20:42:49.468396   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:42:49.471137   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.471481   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:49.471508   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.471686   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:42:49.471889   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:49.472085   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:49.472239   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:42:49.472434   49548 main.go:141] libmachine: Using SSH client type: native
	I0708 20:42:49.472636   49548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0708 20:42:49.472652   49548 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:42:49.746077   49548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:42:49.746110   49548 main.go:141] libmachine: Checking connection to Docker...
	I0708 20:42:49.746137   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetURL
	I0708 20:42:49.747548   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | Using libvirt version 6000000
	I0708 20:42:49.749755   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.750103   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:49.750128   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.750346   49548 main.go:141] libmachine: Docker is up and running!
	I0708 20:42:49.750361   49548 main.go:141] libmachine: Reticulating splines...
	I0708 20:42:49.750368   49548 client.go:171] duration metric: took 24.303285291s to LocalClient.Create
	I0708 20:42:49.750394   49548 start.go:167] duration metric: took 24.303346535s to libmachine.API.Create "old-k8s-version-914355"
	I0708 20:42:49.750406   49548 start.go:293] postStartSetup for "old-k8s-version-914355" (driver="kvm2")
	I0708 20:42:49.750418   49548 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:42:49.750442   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:42:49.750735   49548 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:42:49.750761   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:42:49.752945   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.753288   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:49.753313   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.753439   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:42:49.753655   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:49.753779   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:42:49.753914   49548 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa Username:docker}
	I0708 20:42:49.842461   49548 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:42:49.847510   49548 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:42:49.847540   49548 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:42:49.847597   49548 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:42:49.847668   49548 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:42:49.847757   49548 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:42:49.857753   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:42:49.883444   49548 start.go:296] duration metric: took 133.025545ms for postStartSetup
	I0708 20:42:49.883513   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetConfigRaw
	I0708 20:42:49.884099   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetIP
	I0708 20:42:49.886672   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.886987   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:49.887007   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.887325   49548 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/config.json ...
	I0708 20:42:49.887593   49548 start.go:128] duration metric: took 24.462779193s to createHost
	I0708 20:42:49.887620   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:42:49.889899   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.890261   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:49.890288   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:49.890430   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:42:49.890608   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:49.890774   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:49.890881   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:42:49.891001   49548 main.go:141] libmachine: Using SSH client type: native
	I0708 20:42:49.891205   49548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0708 20:42:49.891217   49548 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 20:42:50.004372   49548 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720471369.979460236
	
	I0708 20:42:50.004392   49548 fix.go:216] guest clock: 1720471369.979460236
	I0708 20:42:50.004399   49548 fix.go:229] Guest: 2024-07-08 20:42:49.979460236 +0000 UTC Remote: 2024-07-08 20:42:49.887607855 +0000 UTC m=+50.285135825 (delta=91.852381ms)
	I0708 20:42:50.004425   49548 fix.go:200] guest clock delta is within tolerance: 91.852381ms
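The clock check above parses the guest's "date +%s.%N" output, compares it with the host clock and accepts the drift if it stays inside a tolerance. A small sketch of that comparison; the one-second tolerance is an assumption, the real threshold lives in fix.go:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log.
	guestRaw := "1720471369.979460236"
	secs, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	// Float parsing loses some nanosecond precision; good enough for a drift check.
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)

	const tolerance = time.Second // assumed tolerance for illustration
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance, would resync\n", delta)
	}
}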
	I0708 20:42:50.004430   49548 start.go:83] releasing machines lock for "old-k8s-version-914355", held for 24.579769451s
	I0708 20:42:50.004452   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:42:50.004719   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetIP
	I0708 20:42:50.007608   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:50.008057   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:50.008087   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:50.008235   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:42:50.008891   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:42:50.009078   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:42:50.009171   49548 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:42:50.009212   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:42:50.009328   49548 ssh_runner.go:195] Run: cat /version.json
	I0708 20:42:50.009354   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:42:50.011907   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:50.012096   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:50.012252   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:50.012280   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:50.012390   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:50.012413   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:50.012534   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:42:50.012616   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:42:50.012710   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:50.012769   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:42:50.012877   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:42:50.012945   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:42:50.013069   49548 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa Username:docker}
	I0708 20:42:50.013084   49548 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa Username:docker}
	I0708 20:42:50.100971   49548 ssh_runner.go:195] Run: systemctl --version
	I0708 20:42:50.124527   49548 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:42:50.295483   49548 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:42:50.304328   49548 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:42:50.304398   49548 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:42:50.334102   49548 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:42:50.334130   49548 start.go:494] detecting cgroup driver to use...
	I0708 20:42:50.334193   49548 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:42:50.357335   49548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:42:50.375483   49548 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:42:50.375556   49548 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:42:50.391624   49548 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:42:50.406680   49548 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:42:50.531672   49548 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:42:50.675426   49548 docker.go:233] disabling docker service ...
	I0708 20:42:50.675535   49548 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:42:50.690778   49548 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:42:50.704923   49548 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:42:50.852221   49548 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:42:50.978546   49548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:42:50.993085   49548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:42:51.012824   49548 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0708 20:42:51.012899   49548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:42:51.024126   49548 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:42:51.024223   49548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:42:51.035039   49548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:42:51.045885   49548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
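The three sed invocations above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.2, switch cgroup_manager to "cgroupfs", and re-add conmon_cgroup = "pod". The same edits expressed as a Go sketch (a generic illustration of the sed logic, not minikube's crio.go; the sample drop-in content is made up for the demo):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Made-up drop-in content standing in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror of: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Mirror of: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Mirror of: sed '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	// Mirror of: sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}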
	I0708 20:42:51.057016   49548 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:42:51.067751   49548 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:42:51.077198   49548 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:42:51.077249   49548 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:42:51.090550   49548 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:42:51.099907   49548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:42:51.246988   49548 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:42:51.394500   49548 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:42:51.394568   49548 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:42:51.399389   49548 start.go:562] Will wait 60s for crictl version
	I0708 20:42:51.399465   49548 ssh_runner.go:195] Run: which crictl
	I0708 20:42:51.403103   49548 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:42:51.445367   49548 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:42:51.445459   49548 ssh_runner.go:195] Run: crio --version
	I0708 20:42:51.474316   49548 ssh_runner.go:195] Run: crio --version
	I0708 20:42:51.507594   49548 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0708 20:42:51.508747   49548 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetIP
	I0708 20:42:51.511720   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:51.512220   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:42:39 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:42:51.512330   49548 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:42:51.513613   49548 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0708 20:42:51.518047   49548 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:42:51.531104   49548 kubeadm.go:877] updating cluster {Name:old-k8s-version-914355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:42:51.531233   49548 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0708 20:42:51.531304   49548 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:42:51.563806   49548 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0708 20:42:51.563887   49548 ssh_runner.go:195] Run: which lz4
	I0708 20:42:51.568121   49548 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 20:42:51.572424   49548 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:42:51.572463   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0708 20:42:53.377051   49548 crio.go:462] duration metric: took 1.808954524s to copy over tarball
	I0708 20:42:53.377147   49548 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:42:56.100761   49548 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.723587759s)
	I0708 20:42:56.100786   49548 crio.go:469] duration metric: took 2.723698222s to extract the tarball
	I0708 20:42:56.100795   49548 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:42:56.144012   49548 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:42:56.195420   49548 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0708 20:42:56.195478   49548 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 20:42:56.195570   49548 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0708 20:42:56.195570   49548 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0708 20:42:56.195628   49548 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0708 20:42:56.195701   49548 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0708 20:42:56.195711   49548 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0708 20:42:56.195724   49548 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 20:42:56.195909   49548 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0708 20:42:56.195575   49548 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:42:56.197883   49548 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0708 20:42:56.197943   49548 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 20:42:56.197959   49548 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0708 20:42:56.198054   49548 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:42:56.198221   49548 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0708 20:42:56.198315   49548 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0708 20:42:56.198383   49548 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0708 20:42:56.198479   49548 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0708 20:42:56.363014   49548 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0708 20:42:56.364532   49548 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 20:42:56.366867   49548 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0708 20:42:56.375227   49548 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0708 20:42:56.383467   49548 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0708 20:42:56.390103   49548 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0708 20:42:56.433607   49548 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0708 20:42:56.474024   49548 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0708 20:42:56.474074   49548 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0708 20:42:56.474148   49548 ssh_runner.go:195] Run: which crictl
	I0708 20:42:56.494360   49548 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:42:56.594131   49548 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0708 20:42:56.594195   49548 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0708 20:42:56.594265   49548 ssh_runner.go:195] Run: which crictl
	I0708 20:42:56.594528   49548 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0708 20:42:56.594564   49548 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 20:42:56.594609   49548 ssh_runner.go:195] Run: which crictl
	I0708 20:42:56.656054   49548 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0708 20:42:56.656108   49548 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0708 20:42:56.656122   49548 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0708 20:42:56.656156   49548 ssh_runner.go:195] Run: which crictl
	I0708 20:42:56.656163   49548 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0708 20:42:56.656199   49548 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0708 20:42:56.656217   49548 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0708 20:42:56.656228   49548 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0708 20:42:56.656205   49548 ssh_runner.go:195] Run: which crictl
	I0708 20:42:56.656273   49548 ssh_runner.go:195] Run: which crictl
	I0708 20:42:56.656369   49548 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0708 20:42:56.656401   49548 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0708 20:42:56.656428   49548 ssh_runner.go:195] Run: which crictl
	I0708 20:42:56.750305   49548 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0708 20:42:56.750339   49548 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 20:42:56.750369   49548 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0708 20:42:56.750416   49548 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0708 20:42:56.750479   49548 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0708 20:42:56.750480   49548 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0708 20:42:56.750531   49548 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0708 20:42:56.895197   49548 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0708 20:42:56.895201   49548 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0708 20:42:56.895233   49548 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0708 20:42:56.895280   49548 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0708 20:42:56.895350   49548 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0708 20:42:56.895366   49548 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0708 20:42:56.895407   49548 cache_images.go:92] duration metric: took 699.908872ms to LoadCachedImages
	W0708 20:42:56.895504   49548 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0708 20:42:56.895520   49548 kubeadm.go:928] updating node { 192.168.50.65 8443 v1.20.0 crio true true} ...
	I0708 20:42:56.895626   49548 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-914355 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:42:56.895697   49548 ssh_runner.go:195] Run: crio config
	I0708 20:42:56.947882   49548 cni.go:84] Creating CNI manager for ""
	I0708 20:42:56.947906   49548 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:42:56.947917   49548 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:42:56.947939   49548 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-914355 NodeName:old-k8s-version-914355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0708 20:42:56.948136   49548 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-914355"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:42:56.948209   49548 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0708 20:42:56.958425   49548 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:42:56.958518   49548 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:42:56.968850   49548 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0708 20:42:56.988974   49548 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:42:57.006757   49548 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0708 20:42:57.024356   49548 ssh_runner.go:195] Run: grep 192.168.50.65	control-plane.minikube.internal$ /etc/hosts
	I0708 20:42:57.028341   49548 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:42:57.043488   49548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:42:57.182615   49548 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:42:57.203459   49548 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355 for IP: 192.168.50.65
	I0708 20:42:57.203479   49548 certs.go:194] generating shared ca certs ...
	I0708 20:42:57.203499   49548 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:42:57.203666   49548 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:42:57.203711   49548 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:42:57.203724   49548 certs.go:256] generating profile certs ...
	I0708 20:42:57.203788   49548 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.key
	I0708 20:42:57.203807   49548 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt with IP's: []
	I0708 20:42:57.440280   49548 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt ...
	I0708 20:42:57.440309   49548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: {Name:mk5ee4bafa2efd5e3315abd3a8f2fde36349f853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:42:57.440490   49548 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.key ...
	I0708 20:42:57.440507   49548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.key: {Name:mk4a3080546a28b70f60e69509ab7f2b767fab65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:42:57.440604   49548 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.key.8b45f3cf
	I0708 20:42:57.440625   49548 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.crt.8b45f3cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.65]
	I0708 20:42:57.547903   49548 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.crt.8b45f3cf ...
	I0708 20:42:57.547928   49548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.crt.8b45f3cf: {Name:mk985889685bc048e5de4e139c5e0e204642578c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:42:57.573911   49548 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.key.8b45f3cf ...
	I0708 20:42:57.573946   49548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.key.8b45f3cf: {Name:mk9c23c12f25bc45248a46018eb3a7e5069d2eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:42:57.574099   49548 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.crt.8b45f3cf -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.crt
	I0708 20:42:57.574202   49548 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.key.8b45f3cf -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.key
	I0708 20:42:57.574289   49548 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/proxy-client.key
	I0708 20:42:57.574314   49548 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/proxy-client.crt with IP's: []
	I0708 20:42:57.669852   49548 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/proxy-client.crt ...
	I0708 20:42:57.669890   49548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/proxy-client.crt: {Name:mk57b71a28602b45f8aeaac279fd5fdd91885fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:42:57.672809   49548 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/proxy-client.key ...
	I0708 20:42:57.672834   49548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/proxy-client.key: {Name:mkb7adce6c84f5a933251defb6dd4886bfbffb9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:42:57.673089   49548 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:42:57.673129   49548 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:42:57.673138   49548 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:42:57.673169   49548 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:42:57.673198   49548 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:42:57.673222   49548 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:42:57.673276   49548 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:42:57.674102   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:42:57.703925   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:42:57.738922   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:42:57.772060   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:42:57.806293   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0708 20:42:57.832407   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:42:57.863226   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:42:57.894698   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 20:42:57.926708   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:42:57.952023   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:42:57.980678   49548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:42:58.010345   49548 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:42:58.043220   49548 ssh_runner.go:195] Run: openssl version
	I0708 20:42:58.052769   49548 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:42:58.072686   49548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:42:58.078418   49548 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:42:58.078490   49548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:42:58.088283   49548 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:42:58.108668   49548 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:42:58.120345   49548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:42:58.126237   49548 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:42:58.126302   49548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:42:58.132187   49548 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:42:58.146509   49548 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:42:58.158331   49548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:42:58.163111   49548 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:42:58.163166   49548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:42:58.169554   49548 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:42:58.180923   49548 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:42:58.185235   49548 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 20:42:58.185304   49548 kubeadm.go:391] StartCluster: {Name:old-k8s-version-914355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:42:58.185414   49548 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:42:58.185470   49548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:42:58.223466   49548 cri.go:89] found id: ""
	I0708 20:42:58.223543   49548 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 20:42:58.233824   49548 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:42:58.243946   49548 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:42:58.254849   49548 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:42:58.254870   49548 kubeadm.go:156] found existing configuration files:
	
	I0708 20:42:58.254921   49548 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:42:58.264304   49548 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:42:58.264374   49548 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:42:58.274292   49548 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:42:58.283289   49548 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:42:58.283353   49548 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:42:58.292750   49548 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:42:58.303733   49548 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:42:58.303794   49548 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:42:58.315260   49548 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:42:58.326630   49548 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:42:58.326701   49548 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:42:58.338778   49548 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 20:42:58.453681   49548 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:42:58.453766   49548 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:42:58.617299   49548 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:42:58.617447   49548 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:42:58.617629   49548 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:42:58.851240   49548 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:42:58.869203   49548 out.go:204]   - Generating certificates and keys ...
	I0708 20:42:58.869339   49548 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:42:58.869435   49548 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:42:59.237213   49548 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0708 20:42:59.350500   49548 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0708 20:42:59.500921   49548 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0708 20:42:59.850576   49548 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0708 20:43:00.047537   49548 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0708 20:43:00.047913   49548 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-914355] and IPs [192.168.50.65 127.0.0.1 ::1]
	I0708 20:43:00.140569   49548 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0708 20:43:00.141092   49548 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-914355] and IPs [192.168.50.65 127.0.0.1 ::1]
	I0708 20:43:00.231791   49548 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0708 20:43:00.326810   49548 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0708 20:43:00.628298   49548 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0708 20:43:00.628623   49548 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:43:00.859430   49548 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:43:01.136595   49548 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:43:01.350645   49548 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:43:01.583446   49548 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:43:01.603190   49548 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:43:01.605643   49548 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:43:01.605802   49548 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:43:01.758234   49548 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:43:01.760286   49548 out.go:204]   - Booting up control plane ...
	I0708 20:43:01.760414   49548 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:43:01.772063   49548 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:43:01.774478   49548 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:43:01.779337   49548 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:43:01.789994   49548 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:43:41.785682   49548 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:43:41.785833   49548 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:43:41.786069   49548 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:43:46.786832   49548 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:43:46.787125   49548 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:43:56.786326   49548 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:43:56.786522   49548 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:44:16.786912   49548 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:44:16.787210   49548 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:44:56.788854   49548 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:44:56.789129   49548 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:44:56.789154   49548 kubeadm.go:309] 
	I0708 20:44:56.789208   49548 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:44:56.789291   49548 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:44:56.789320   49548 kubeadm.go:309] 
	I0708 20:44:56.789368   49548 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:44:56.789424   49548 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:44:56.789566   49548 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:44:56.789578   49548 kubeadm.go:309] 
	I0708 20:44:56.789850   49548 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:44:56.789910   49548 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:44:56.789957   49548 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:44:56.789967   49548 kubeadm.go:309] 
	I0708 20:44:56.790112   49548 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:44:56.790213   49548 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:44:56.790223   49548 kubeadm.go:309] 
	I0708 20:44:56.790362   49548 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:44:56.790487   49548 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:44:56.790612   49548 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:44:56.790715   49548 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:44:56.790728   49548 kubeadm.go:309] 
	I0708 20:44:56.791596   49548 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 20:44:56.791716   49548 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:44:56.791813   49548 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0708 20:44:56.791964   49548 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-914355] and IPs [192.168.50.65 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-914355] and IPs [192.168.50.65 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-914355] and IPs [192.168.50.65 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-914355] and IPs [192.168.50.65 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0708 20:44:56.792023   49548 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 20:44:57.301619   49548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:44:57.324476   49548 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:44:57.345873   49548 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:44:57.345900   49548 kubeadm.go:156] found existing configuration files:
	
	I0708 20:44:57.345954   49548 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:44:57.359918   49548 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:44:57.359982   49548 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:44:57.374581   49548 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:44:57.388173   49548 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:44:57.388257   49548 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:44:57.405746   49548 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:44:57.426336   49548 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:44:57.426427   49548 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:44:57.443309   49548 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:44:57.458595   49548 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:44:57.458651   49548 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:44:57.474730   49548 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 20:44:57.571864   49548 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:44:57.571939   49548 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:44:57.755781   49548 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:44:57.755919   49548 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:44:57.756040   49548 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:44:58.002184   49548 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:44:58.062515   49548 out.go:204]   - Generating certificates and keys ...
	I0708 20:44:58.062631   49548 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:44:58.062692   49548 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:44:58.062800   49548 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:44:58.062914   49548 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:44:58.063027   49548 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:44:58.063102   49548 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:44:58.063189   49548 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:44:58.063281   49548 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:44:58.063371   49548 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:44:58.063481   49548 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:44:58.063549   49548 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:44:58.063627   49548 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:44:58.063698   49548 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:44:58.159631   49548 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:44:58.378949   49548 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:44:58.531648   49548 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:44:58.555187   49548 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:44:58.556499   49548 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:44:58.556563   49548 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:44:58.700344   49548 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:44:58.702216   49548 out.go:204]   - Booting up control plane ...
	I0708 20:44:58.702343   49548 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:44:58.706660   49548 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:44:58.707710   49548 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:44:58.708505   49548 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:44:58.710757   49548 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:45:38.714861   49548 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:45:38.715124   49548 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:45:38.715421   49548 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:45:43.715777   49548 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:45:43.716020   49548 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:45:53.716805   49548 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:45:53.717092   49548 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:46:13.715957   49548 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:46:13.716266   49548 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:46:53.715783   49548 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:46:53.716078   49548 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:46:53.716103   49548 kubeadm.go:309] 
	I0708 20:46:53.716169   49548 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:46:53.716224   49548 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:46:53.716234   49548 kubeadm.go:309] 
	I0708 20:46:53.716298   49548 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:46:53.716350   49548 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:46:53.716476   49548 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:46:53.716484   49548 kubeadm.go:309] 
	I0708 20:46:53.716613   49548 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:46:53.716654   49548 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:46:53.716695   49548 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:46:53.716701   49548 kubeadm.go:309] 
	I0708 20:46:53.716845   49548 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:46:53.716956   49548 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:46:53.716963   49548 kubeadm.go:309] 
	I0708 20:46:53.717110   49548 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:46:53.717235   49548 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:46:53.717328   49548 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:46:53.717421   49548 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:46:53.717430   49548 kubeadm.go:309] 
	I0708 20:46:53.718420   49548 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 20:46:53.718526   49548 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:46:53.718585   49548 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:46:53.718664   49548 kubeadm.go:393] duration metric: took 3m55.533363646s to StartCluster
	I0708 20:46:53.718715   49548 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:46:53.718778   49548 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:46:53.798556   49548 cri.go:89] found id: ""
	I0708 20:46:53.798591   49548 logs.go:276] 0 containers: []
	W0708 20:46:53.798602   49548 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:46:53.798609   49548 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:46:53.798672   49548 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:46:53.873765   49548 cri.go:89] found id: ""
	I0708 20:46:53.873792   49548 logs.go:276] 0 containers: []
	W0708 20:46:53.873802   49548 logs.go:278] No container was found matching "etcd"
	I0708 20:46:53.873809   49548 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:46:53.873860   49548 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:46:53.924745   49548 cri.go:89] found id: ""
	I0708 20:46:53.924767   49548 logs.go:276] 0 containers: []
	W0708 20:46:53.924781   49548 logs.go:278] No container was found matching "coredns"
	I0708 20:46:53.924788   49548 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:46:53.924839   49548 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:46:53.981130   49548 cri.go:89] found id: ""
	I0708 20:46:53.981155   49548 logs.go:276] 0 containers: []
	W0708 20:46:53.981167   49548 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:46:53.981175   49548 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:46:53.981242   49548 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:46:54.057038   49548 cri.go:89] found id: ""
	I0708 20:46:54.057065   49548 logs.go:276] 0 containers: []
	W0708 20:46:54.057080   49548 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:46:54.057087   49548 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:46:54.057149   49548 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:46:54.116710   49548 cri.go:89] found id: ""
	I0708 20:46:54.116734   49548 logs.go:276] 0 containers: []
	W0708 20:46:54.116744   49548 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:46:54.116752   49548 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:46:54.116812   49548 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:46:54.168323   49548 cri.go:89] found id: ""
	I0708 20:46:54.168349   49548 logs.go:276] 0 containers: []
	W0708 20:46:54.168359   49548 logs.go:278] No container was found matching "kindnet"
	I0708 20:46:54.168370   49548 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:46:54.168383   49548 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:46:54.297128   49548 logs.go:123] Gathering logs for container status ...
	I0708 20:46:54.297160   49548 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:46:54.366966   49548 logs.go:123] Gathering logs for kubelet ...
	I0708 20:46:54.366999   49548 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:46:54.446027   49548 logs.go:123] Gathering logs for dmesg ...
	I0708 20:46:54.446068   49548 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:46:54.466178   49548 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:46:54.466224   49548 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:46:54.634729   49548 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0708 20:46:54.634801   49548 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0708 20:46:54.634869   49548 out.go:239] * 
	* 
	W0708 20:46:54.634935   49548 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:46:54.634966   49548 out.go:239] * 
	* 
	W0708 20:46:54.636037   49548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:46:54.639509   49548 out.go:177] 
	W0708 20:46:54.640777   49548 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:46:54.640857   49548 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0708 20:46:54.640888   49548 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0708 20:46:54.642393   49548 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-914355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
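The failure above ends with minikube's own suggestion (K8S_KUBELET_NOT_RUNNING, related issue #4172): retry the start with the kubelet cgroup driver pinned to systemd. A minimal retry sketch, reusing the key args from the failed run plus that one extra flag — the flag is minikube's suggestion, not something verified in this run:

    out/minikube-linux-amd64 start -p old-k8s-version-914355 \
      --memory=2200 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd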
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 6 (284.412735ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:46:54.970568   55831 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-914355" does not appear in /home/jenkins/minikube-integration/19195-5988/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-914355" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (295.39s)
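The kubeadm output in this section shows the kubelet health endpoint on 127.0.0.1:10248 never answering, so the control-plane static pods were never started. A minimal diagnostic sketch on the node, following the commands kubeadm itself prints (shell access to the VM via `minikube ssh` is assumed):

    # open a shell on the profile's VM
    out/minikube-linux-amd64 ssh -p old-k8s-version-914355
    # check whether the kubelet service is running and why it may have exited
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    # list any control-plane containers CRI-O managed to start
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If crictl shows an exited kube-apiserver or etcd container, `sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID` retrieves its logs, as noted in the kubeadm output above.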

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-914355 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-914355 create -f testdata/busybox.yaml: exit status 1 (63.444203ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-914355" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-914355 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 6 (264.379798ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:46:55.298445   55895 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-914355" does not appear in /home/jenkins/minikube-integration/19195-5988/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-914355" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 6 (263.940026ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:46:55.560298   55956 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-914355" does not appear in /home/jenkins/minikube-integration/19195-5988/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-914355" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.59s)
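Both post-mortems fail for the same underlying reason: the profile never registered an endpoint in the kubeconfig, so the context "old-k8s-version-914355" does not exist and kubectl still points at a stale minikube-vm entry. A small check-and-repair sketch along the lines of the warning printed above (the repair only helps once the cluster itself becomes reachable):

    # confirm the context is missing from the kubeconfig the test uses
    kubectl config get-contexts
    # regenerate the kubeconfig entry for the profile, as the warning suggests
    out/minikube-linux-amd64 update-context -p old-k8s-version-914355
    # re-check host/kubelet/apiserver state afterwards
    out/minikube-linux-amd64 status -p old-k8s-version-914355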

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (111.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-914355 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-914355 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m51.517697917s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-914355 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-914355 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-914355 describe deploy/metrics-server -n kube-system: exit status 1 (45.635087ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-914355" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-914355 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 6 (221.971918ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:48:47.351848   57324 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-914355" does not appear in /home/jenkins/minikube-integration/19195-5988/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-914355" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (111.79s)
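The addon enable picked up the image override (fake.domain/registry.k8s.io/echoserver:1.4), but applying the metrics-server manifests failed because the apiserver on localhost:8443 was still refusing connections. Assuming a reachable apiserver, the check the test performs with `kubectl describe` can also be expressed as a direct image lookup — a sketch, not part of the test itself:

    kubectl --context old-k8s-version-914355 -n kube-system \
      get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to contain: fake.domain/registry.k8s.io/echoserver:1.4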

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-028021 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-028021 --alsologtostderr -v=3: exit status 82 (2m0.515790673s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-028021"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:48:24.748275   57216 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:48:24.748398   57216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:48:24.748409   57216 out.go:304] Setting ErrFile to fd 2...
	I0708 20:48:24.748416   57216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:48:24.748663   57216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:48:24.748949   57216 out.go:298] Setting JSON to false
	I0708 20:48:24.749026   57216 mustload.go:65] Loading cluster: no-preload-028021
	I0708 20:48:24.749353   57216 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:48:24.749438   57216 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/config.json ...
	I0708 20:48:24.749618   57216 mustload.go:65] Loading cluster: no-preload-028021
	I0708 20:48:24.749740   57216 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:48:24.749772   57216 stop.go:39] StopHost: no-preload-028021
	I0708 20:48:24.750309   57216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:48:24.750355   57216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:48:24.765270   57216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0708 20:48:24.765680   57216 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:48:24.766311   57216 main.go:141] libmachine: Using API Version  1
	I0708 20:48:24.766342   57216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:48:24.766707   57216 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:48:24.769077   57216 out.go:177] * Stopping node "no-preload-028021"  ...
	I0708 20:48:24.770571   57216 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0708 20:48:24.770596   57216 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:48:24.770813   57216 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0708 20:48:24.770851   57216 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:48:24.773586   57216 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:48:24.773987   57216 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:47:14 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:48:24.774025   57216 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:48:24.774257   57216 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:48:24.774433   57216 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:48:24.774566   57216 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:48:24.774695   57216 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:48:24.879529   57216 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0708 20:48:24.945738   57216 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0708 20:48:25.009504   57216 main.go:141] libmachine: Stopping "no-preload-028021"...
	I0708 20:48:25.009537   57216 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:48:25.011276   57216 main.go:141] libmachine: (no-preload-028021) Calling .Stop
	I0708 20:48:25.015088   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 0/120
	I0708 20:48:26.016497   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 1/120
	I0708 20:48:27.018106   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 2/120
	I0708 20:48:28.019388   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 3/120
	I0708 20:48:29.020854   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 4/120
	I0708 20:48:30.023336   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 5/120
	I0708 20:48:31.024733   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 6/120
	I0708 20:48:32.026587   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 7/120
	I0708 20:48:33.028194   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 8/120
	I0708 20:48:34.029916   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 9/120
	I0708 20:48:35.031576   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 10/120
	I0708 20:48:36.032809   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 11/120
	I0708 20:48:37.034088   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 12/120
	I0708 20:48:38.035482   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 13/120
	I0708 20:48:39.036799   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 14/120
	I0708 20:48:40.038463   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 15/120
	I0708 20:48:41.039739   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 16/120
	I0708 20:48:42.041773   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 17/120
	I0708 20:48:43.043055   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 18/120
	I0708 20:48:44.044465   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 19/120
	I0708 20:48:45.047202   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 20/120
	I0708 20:48:46.049183   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 21/120
	I0708 20:48:47.050686   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 22/120
	I0708 20:48:48.052743   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 23/120
	I0708 20:48:49.054178   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 24/120
	I0708 20:48:50.056195   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 25/120
	I0708 20:48:51.057860   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 26/120
	I0708 20:48:52.059237   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 27/120
	I0708 20:48:53.061458   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 28/120
	I0708 20:48:54.062737   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 29/120
	I0708 20:48:55.064839   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 30/120
	I0708 20:48:56.065965   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 31/120
	I0708 20:48:57.067247   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 32/120
	I0708 20:48:58.068616   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 33/120
	I0708 20:48:59.070890   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 34/120
	I0708 20:49:00.072987   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 35/120
	I0708 20:49:01.074714   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 36/120
	I0708 20:49:02.076405   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 37/120
	I0708 20:49:03.078385   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 38/120
	I0708 20:49:04.079881   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 39/120
	I0708 20:49:05.082271   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 40/120
	I0708 20:49:06.083867   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 41/120
	I0708 20:49:07.085925   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 42/120
	I0708 20:49:08.087264   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 43/120
	I0708 20:49:09.088671   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 44/120
	I0708 20:49:10.090556   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 45/120
	I0708 20:49:11.091965   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 46/120
	I0708 20:49:12.094282   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 47/120
	I0708 20:49:13.096510   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 48/120
	I0708 20:49:14.099004   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 49/120
	I0708 20:49:15.101251   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 50/120
	I0708 20:49:16.102709   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 51/120
	I0708 20:49:17.104325   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 52/120
	I0708 20:49:18.105460   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 53/120
	I0708 20:49:19.106706   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 54/120
	I0708 20:49:20.108020   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 55/120
	I0708 20:49:21.110284   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 56/120
	I0708 20:49:22.111519   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 57/120
	I0708 20:49:23.112718   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 58/120
	I0708 20:49:24.113843   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 59/120
	I0708 20:49:25.115912   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 60/120
	I0708 20:49:26.118110   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 61/120
	I0708 20:49:27.119443   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 62/120
	I0708 20:49:28.121543   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 63/120
	I0708 20:49:29.122744   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 64/120
	I0708 20:49:30.124615   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 65/120
	I0708 20:49:31.125871   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 66/120
	I0708 20:49:32.127025   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 67/120
	I0708 20:49:33.128241   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 68/120
	I0708 20:49:34.129506   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 69/120
	I0708 20:49:35.132059   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 70/120
	I0708 20:49:36.133904   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 71/120
	I0708 20:49:37.135249   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 72/120
	I0708 20:49:38.136593   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 73/120
	I0708 20:49:39.137912   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 74/120
	I0708 20:49:40.139861   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 75/120
	I0708 20:49:41.141896   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 76/120
	I0708 20:49:42.143298   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 77/120
	I0708 20:49:43.144587   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 78/120
	I0708 20:49:44.146033   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 79/120
	I0708 20:49:45.148056   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 80/120
	I0708 20:49:46.149643   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 81/120
	I0708 20:49:47.151939   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 82/120
	I0708 20:49:48.153862   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 83/120
	I0708 20:49:49.155735   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 84/120
	I0708 20:49:50.157765   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 85/120
	I0708 20:49:51.159102   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 86/120
	I0708 20:49:52.160650   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 87/120
	I0708 20:49:53.161978   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 88/120
	I0708 20:49:54.163413   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 89/120
	I0708 20:49:55.165283   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 90/120
	I0708 20:49:56.167444   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 91/120
	I0708 20:49:57.168912   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 92/120
	I0708 20:49:58.170331   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 93/120
	I0708 20:49:59.171728   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 94/120
	I0708 20:50:00.173874   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 95/120
	I0708 20:50:01.175512   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 96/120
	I0708 20:50:02.176824   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 97/120
	I0708 20:50:03.178182   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 98/120
	I0708 20:50:04.179599   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 99/120
	I0708 20:50:05.181933   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 100/120
	I0708 20:50:06.183334   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 101/120
	I0708 20:50:07.184980   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 102/120
	I0708 20:50:08.186793   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 103/120
	I0708 20:50:09.188344   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 104/120
	I0708 20:50:10.189993   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 105/120
	I0708 20:50:11.191199   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 106/120
	I0708 20:50:12.192416   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 107/120
	I0708 20:50:13.193975   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 108/120
	I0708 20:50:14.195503   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 109/120
	I0708 20:50:15.197603   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 110/120
	I0708 20:50:16.198990   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 111/120
	I0708 20:50:17.200539   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 112/120
	I0708 20:50:18.201838   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 113/120
	I0708 20:50:19.203058   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 114/120
	I0708 20:50:20.204820   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 115/120
	I0708 20:50:21.206286   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 116/120
	I0708 20:50:22.207477   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 117/120
	I0708 20:50:23.208878   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 118/120
	I0708 20:50:24.211098   57216 main.go:141] libmachine: (no-preload-028021) Waiting for machine to stop 119/120
	I0708 20:50:25.212346   57216 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0708 20:50:25.212401   57216 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0708 20:50:25.214355   57216 out.go:177] 
	W0708 20:50:25.215982   57216 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0708 20:50:25.216034   57216 out.go:239] * 
	* 
	W0708 20:50:25.220598   57216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:50:25.222092   57216 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-028021 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-028021 -n no-preload-028021
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-028021 -n no-preload-028021: exit status 3 (18.619464987s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:50:43.843679   58425 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	E0708 20:50:43.843698   58425 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-028021" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.14s)
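The 120 "Waiting for machine to stop N/120" lines above are a once-per-second poll: after roughly two minutes without the VM leaving the "Running" state, minikube gives up and exits with GUEST_STOP_TIMEOUT (exit status 82), and the follow-up status check then fails with "no route to host", so post-mortem log retrieval is skipped. The sketch below only illustrates that bounded-polling pattern; it is not minikube's actual stop.go code, and the `vmRunning` helper, the attempt budget, and the profile name used in `main` are assumptions made for the example.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// vmRunning stands in for a real driver query of the VM's power state
// (for example, a libvirt domain-state lookup). For this sketch it is
// assumed to always report "running", which reproduces the timeout path
// captured in the log above.
func vmRunning(profile string) bool {
	return true
}

// stopWithTimeout requests a stop (elided here) and then polls once per
// second, up to maxAttempts times, mirroring the
// "Waiting for machine to stop N/120" progression in the log.
func stopWithTimeout(profile string, maxAttempts int) error {
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if !vmRunning(profile) {
			return nil // the machine reached a stopped state in time
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", attempt, maxAttempts)
		time.Sleep(1 * time.Second)
	}
	// The budget is spent; the caller turns this error into a
	// GUEST_STOP_TIMEOUT-style failure, as the test log shows.
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// The failing run above used a budget of 120 attempts (~2 minutes);
	// a small budget keeps this demonstration short.
	if err := stopWithTimeout("no-preload-028021", 3); err != nil {
		fmt.Println("stop err:", err)
	}
}
```

In a real driver, `vmRunning` would be replaced by an actual state query (for the kvm2 driver, an inspection of the libvirt domain), and the returned error would be mapped to the GUEST_STOP_TIMEOUT exit and the advice box seen in the log.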

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (507.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-914355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-914355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m25.141078445s)

                                                
                                                
-- stdout --
	* [old-k8s-version-914355] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19195
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-914355" primary control-plane node in "old-k8s-version-914355" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-914355" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:48:51.940614   57466 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:48:51.940864   57466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:48:51.940873   57466 out.go:304] Setting ErrFile to fd 2...
	I0708 20:48:51.940877   57466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:48:51.941070   57466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:48:51.941590   57466 out.go:298] Setting JSON to false
	I0708 20:48:51.942519   57466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5481,"bootTime":1720466251,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:48:51.942579   57466 start.go:139] virtualization: kvm guest
	I0708 20:48:51.944930   57466 out.go:177] * [old-k8s-version-914355] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:48:51.946426   57466 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:48:51.946466   57466 notify.go:220] Checking for updates...
	I0708 20:48:51.949758   57466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:48:51.951061   57466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:48:51.952298   57466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:48:51.953458   57466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:48:51.954739   57466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:48:51.956381   57466 config.go:182] Loaded profile config "old-k8s-version-914355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0708 20:48:51.956838   57466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:48:51.956895   57466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:48:51.971806   57466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I0708 20:48:51.972186   57466 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:48:51.972856   57466 main.go:141] libmachine: Using API Version  1
	I0708 20:48:51.972877   57466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:48:51.973236   57466 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:48:51.973405   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:48:51.975143   57466 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0708 20:48:51.976248   57466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:48:51.976549   57466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:48:51.976588   57466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:48:51.992533   57466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37279
	I0708 20:48:51.992898   57466 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:48:51.993317   57466 main.go:141] libmachine: Using API Version  1
	I0708 20:48:51.993336   57466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:48:51.993630   57466 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:48:51.993811   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:48:52.030412   57466 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 20:48:52.031872   57466 start.go:297] selected driver: kvm2
	I0708 20:48:52.031889   57466 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-914355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:48:52.032039   57466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:48:52.032791   57466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:48:52.032886   57466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:48:52.048440   57466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:48:52.048813   57466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:48:52.048853   57466 cni.go:84] Creating CNI manager for ""
	I0708 20:48:52.048868   57466 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:48:52.048909   57466 start.go:340] cluster config:
	{Name:old-k8s-version-914355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914355 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:48:52.049012   57466 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:48:52.051553   57466 out.go:177] * Starting "old-k8s-version-914355" primary control-plane node in "old-k8s-version-914355" cluster
	I0708 20:48:52.052952   57466 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0708 20:48:52.052990   57466 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0708 20:48:52.052998   57466 cache.go:56] Caching tarball of preloaded images
	I0708 20:48:52.053081   57466 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 20:48:52.053091   57466 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0708 20:48:52.053198   57466 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/config.json ...
	I0708 20:48:52.053383   57466 start.go:360] acquireMachinesLock for old-k8s-version-914355: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:48:52.053424   57466 start.go:364] duration metric: took 21.362µs to acquireMachinesLock for "old-k8s-version-914355"
	I0708 20:48:52.053447   57466 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:48:52.053479   57466 fix.go:54] fixHost starting: 
	I0708 20:48:52.053730   57466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:48:52.053758   57466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:48:52.069037   57466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40923
	I0708 20:48:52.069492   57466 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:48:52.069934   57466 main.go:141] libmachine: Using API Version  1
	I0708 20:48:52.069978   57466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:48:52.070346   57466 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:48:52.070548   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:48:52.070696   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetState
	I0708 20:48:52.072353   57466 fix.go:112] recreateIfNeeded on old-k8s-version-914355: state=Stopped err=<nil>
	I0708 20:48:52.072373   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	W0708 20:48:52.072524   57466 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:48:52.074427   57466 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-914355" ...
	I0708 20:48:52.075621   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .Start
	I0708 20:48:52.075772   57466 main.go:141] libmachine: (old-k8s-version-914355) Ensuring networks are active...
	I0708 20:48:52.076587   57466 main.go:141] libmachine: (old-k8s-version-914355) Ensuring network default is active
	I0708 20:48:52.077078   57466 main.go:141] libmachine: (old-k8s-version-914355) Ensuring network mk-old-k8s-version-914355 is active
	I0708 20:48:52.077763   57466 main.go:141] libmachine: (old-k8s-version-914355) Getting domain xml...
	I0708 20:48:52.078630   57466 main.go:141] libmachine: (old-k8s-version-914355) Creating domain...
	I0708 20:48:53.324069   57466 main.go:141] libmachine: (old-k8s-version-914355) Waiting to get IP...
	I0708 20:48:53.324800   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:48:53.325231   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:48:53.325307   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:48:53.325216   57501 retry.go:31] will retry after 259.83154ms: waiting for machine to come up
	I0708 20:48:53.586729   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:48:53.587153   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:48:53.587182   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:48:53.587100   57501 retry.go:31] will retry after 266.934071ms: waiting for machine to come up
	I0708 20:48:53.855671   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:48:53.856219   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:48:53.856246   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:48:53.856179   57501 retry.go:31] will retry after 405.58616ms: waiting for machine to come up
	I0708 20:48:54.263839   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:48:54.264295   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:48:54.264318   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:48:54.264263   57501 retry.go:31] will retry after 513.298749ms: waiting for machine to come up
	I0708 20:48:54.778751   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:48:54.779252   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:48:54.779283   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:48:54.779195   57501 retry.go:31] will retry after 723.904603ms: waiting for machine to come up
	I0708 20:48:55.505347   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:48:55.505977   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:48:55.505999   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:48:55.505927   57501 retry.go:31] will retry after 575.207236ms: waiting for machine to come up
	I0708 20:48:56.082725   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:48:56.083220   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:48:56.083254   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:48:56.083173   57501 retry.go:31] will retry after 1.013387057s: waiting for machine to come up
	I0708 20:48:57.098032   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:48:57.098568   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:48:57.098592   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:48:57.098535   57501 retry.go:31] will retry after 1.310592458s: waiting for machine to come up
	I0708 20:48:58.410337   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:48:58.410816   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:48:58.410842   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:48:58.410778   57501 retry.go:31] will retry after 1.463889343s: waiting for machine to come up
	I0708 20:48:59.876590   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:48:59.877080   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:48:59.877105   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:48:59.877051   57501 retry.go:31] will retry after 1.634758777s: waiting for machine to come up
	I0708 20:49:01.513699   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:01.514208   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:49:01.514248   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:49:01.514167   57501 retry.go:31] will retry after 2.574465292s: waiting for machine to come up
	I0708 20:49:04.090364   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:04.090816   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:49:04.090840   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:49:04.090766   57501 retry.go:31] will retry after 3.171504815s: waiting for machine to come up
	I0708 20:49:07.264149   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:07.264576   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | unable to find current IP address of domain old-k8s-version-914355 in network mk-old-k8s-version-914355
	I0708 20:49:07.264619   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | I0708 20:49:07.264547   57501 retry.go:31] will retry after 2.955102474s: waiting for machine to come up
	I0708 20:49:10.221047   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.221557   57466 main.go:141] libmachine: (old-k8s-version-914355) Found IP for machine: 192.168.50.65
	I0708 20:49:10.221578   57466 main.go:141] libmachine: (old-k8s-version-914355) Reserving static IP address...
	I0708 20:49:10.221589   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has current primary IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.222133   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "old-k8s-version-914355", mac: "52:54:00:2b:81:07", ip: "192.168.50.65"} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:10.222180   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | skip adding static IP to network mk-old-k8s-version-914355 - found existing host DHCP lease matching {name: "old-k8s-version-914355", mac: "52:54:00:2b:81:07", ip: "192.168.50.65"}
	I0708 20:49:10.222197   57466 main.go:141] libmachine: (old-k8s-version-914355) Reserved static IP address: 192.168.50.65
	I0708 20:49:10.222214   57466 main.go:141] libmachine: (old-k8s-version-914355) Waiting for SSH to be available...
	I0708 20:49:10.222228   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | Getting to WaitForSSH function...
	I0708 20:49:10.224419   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.224803   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:10.224833   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.224967   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | Using SSH client type: external
	I0708 20:49:10.225004   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa (-rw-------)
	I0708 20:49:10.225038   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:49:10.225051   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | About to run SSH command:
	I0708 20:49:10.225064   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | exit 0
	I0708 20:49:10.356083   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | SSH cmd err, output: <nil>: 
	I0708 20:49:10.356516   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetConfigRaw
	I0708 20:49:10.357214   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetIP
	I0708 20:49:10.359777   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.360131   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:10.360167   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.360469   57466 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/config.json ...
	I0708 20:49:10.360710   57466 machine.go:94] provisionDockerMachine start ...
	I0708 20:49:10.360733   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:49:10.360972   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:49:10.363391   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.363777   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:10.363807   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.363878   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:49:10.364061   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:10.364213   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:10.364400   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:49:10.364566   57466 main.go:141] libmachine: Using SSH client type: native
	I0708 20:49:10.364748   57466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0708 20:49:10.364758   57466 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:49:10.480422   57466 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:49:10.480452   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetMachineName
	I0708 20:49:10.480706   57466 buildroot.go:166] provisioning hostname "old-k8s-version-914355"
	I0708 20:49:10.480740   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetMachineName
	I0708 20:49:10.480939   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:49:10.483767   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.484180   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:10.484204   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.484349   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:49:10.484544   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:10.484734   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:10.484844   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:49:10.484993   57466 main.go:141] libmachine: Using SSH client type: native
	I0708 20:49:10.485154   57466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0708 20:49:10.485165   57466 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-914355 && echo "old-k8s-version-914355" | sudo tee /etc/hostname
	I0708 20:49:10.619856   57466 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-914355
	
	I0708 20:49:10.619882   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:49:10.623184   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.623659   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:10.623699   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.623893   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:49:10.624087   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:10.624242   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:10.624362   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:49:10.624526   57466 main.go:141] libmachine: Using SSH client type: native
	I0708 20:49:10.624697   57466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0708 20:49:10.624715   57466 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-914355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-914355/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-914355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:49:10.750953   57466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:49:10.750995   57466 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:49:10.751018   57466 buildroot.go:174] setting up certificates
	I0708 20:49:10.751030   57466 provision.go:84] configureAuth start
	I0708 20:49:10.751042   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetMachineName
	I0708 20:49:10.751361   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetIP
	I0708 20:49:10.754389   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.754820   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:10.754836   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.755003   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:49:10.757244   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.757517   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:10.757559   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:10.757720   57466 provision.go:143] copyHostCerts
	I0708 20:49:10.757776   57466 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:49:10.757784   57466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:49:10.758008   57466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:49:10.758133   57466 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:49:10.758144   57466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:49:10.758175   57466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:49:10.758229   57466 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:49:10.758239   57466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:49:10.758260   57466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:49:10.758310   57466 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-914355 san=[127.0.0.1 192.168.50.65 localhost minikube old-k8s-version-914355]
	I0708 20:49:11.033165   57466 provision.go:177] copyRemoteCerts
	I0708 20:49:11.033219   57466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:49:11.033251   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:49:11.036125   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.036417   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:11.036443   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.036633   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:49:11.036825   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:11.037002   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:49:11.037157   57466 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa Username:docker}
	I0708 20:49:11.126646   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:49:11.153154   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0708 20:49:11.178064   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 20:49:11.202501   57466 provision.go:87] duration metric: took 451.454444ms to configureAuth
	I0708 20:49:11.202539   57466 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:49:11.202912   57466 config.go:182] Loaded profile config "old-k8s-version-914355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0708 20:49:11.203008   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:49:11.206002   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.206489   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:11.206523   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.206788   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:49:11.206998   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:11.207159   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:11.207331   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:49:11.207529   57466 main.go:141] libmachine: Using SSH client type: native
	I0708 20:49:11.207729   57466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0708 20:49:11.207754   57466 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:49:11.486993   57466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:49:11.487024   57466 machine.go:97] duration metric: took 1.126300315s to provisionDockerMachine
	I0708 20:49:11.487036   57466 start.go:293] postStartSetup for "old-k8s-version-914355" (driver="kvm2")
	I0708 20:49:11.487046   57466 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:49:11.487062   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:49:11.487393   57466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:49:11.487427   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:49:11.490261   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.490616   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:11.490642   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.490882   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:49:11.491265   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:11.491469   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:49:11.491637   57466 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa Username:docker}
	I0708 20:49:11.579557   57466 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:49:11.583973   57466 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:49:11.584004   57466 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:49:11.584074   57466 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:49:11.584204   57466 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:49:11.584309   57466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:49:11.595946   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:49:11.619884   57466 start.go:296] duration metric: took 132.836455ms for postStartSetup
	I0708 20:49:11.619937   57466 fix.go:56] duration metric: took 19.566472358s for fixHost
	I0708 20:49:11.619962   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:49:11.622672   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.622918   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:11.622945   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.623087   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:49:11.623328   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:11.623550   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:11.623718   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:49:11.623901   57466 main.go:141] libmachine: Using SSH client type: native
	I0708 20:49:11.624061   57466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0708 20:49:11.624071   57466 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 20:49:11.744612   57466 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720471751.719298115
	
	I0708 20:49:11.744634   57466 fix.go:216] guest clock: 1720471751.719298115
	I0708 20:49:11.744641   57466 fix.go:229] Guest: 2024-07-08 20:49:11.719298115 +0000 UTC Remote: 2024-07-08 20:49:11.619942616 +0000 UTC m=+19.713426296 (delta=99.355499ms)
	I0708 20:49:11.744698   57466 fix.go:200] guest clock delta is within tolerance: 99.355499ms
	I0708 20:49:11.744704   57466 start.go:83] releasing machines lock for "old-k8s-version-914355", held for 19.691272664s
	I0708 20:49:11.744725   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:49:11.745009   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetIP
	I0708 20:49:11.747912   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.748333   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:11.748362   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.748528   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:49:11.749003   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:49:11.749213   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .DriverName
	I0708 20:49:11.749293   57466 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:49:11.749322   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:49:11.749432   57466 ssh_runner.go:195] Run: cat /version.json
	I0708 20:49:11.749452   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHHostname
	I0708 20:49:11.752351   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.752370   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.752743   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:11.752768   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.752796   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:11.752813   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:11.752932   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:49:11.753042   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHPort
	I0708 20:49:11.753131   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:11.753224   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHKeyPath
	I0708 20:49:11.753294   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:49:11.753360   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetSSHUsername
	I0708 20:49:11.753451   57466 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa Username:docker}
	I0708 20:49:11.753469   57466 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/old-k8s-version-914355/id_rsa Username:docker}
	I0708 20:49:11.866579   57466 ssh_runner.go:195] Run: systemctl --version
	I0708 20:49:11.873070   57466 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:49:12.026526   57466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:49:12.033215   57466 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:49:12.033298   57466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:49:12.053620   57466 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:49:12.053648   57466 start.go:494] detecting cgroup driver to use...
	I0708 20:49:12.053732   57466 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:49:12.071247   57466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:49:12.087888   57466 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:49:12.087939   57466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:49:12.102747   57466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:49:12.117861   57466 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:49:12.245271   57466 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:49:12.396452   57466 docker.go:233] disabling docker service ...
	I0708 20:49:12.396521   57466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:49:12.412541   57466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:49:12.432399   57466 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:49:12.597650   57466 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:49:12.729682   57466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:49:12.754436   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:49:12.774725   57466 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0708 20:49:12.774794   57466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:49:12.786321   57466 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:49:12.786411   57466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:49:12.797711   57466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:49:12.810318   57466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:49:12.822404   57466 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:49:12.834952   57466 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:49:12.845849   57466 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:49:12.845913   57466 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:49:12.862039   57466 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:49:12.875633   57466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:49:13.018518   57466 ssh_runner.go:195] Run: sudo systemctl restart crio
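
The sed edits above pin two values in /etc/crio/crio.conf.d/02-crio.conf before the restart: the pause image and the cgroup manager. A minimal local sketch of the same edit in Go, using a regexp instead of sed; the file path and both values come from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setOption rewrites any "<key> = ..." line to the given quoted value,
// mirroring the sed expressions shown in the log above.
func setOption(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	conf = setOption(conf, "pause_image", "registry.k8s.io/pause:3.2")
	conf = setOption(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
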
	I0708 20:49:13.194752   57466 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:49:13.194826   57466 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:49:13.201209   57466 start.go:562] Will wait 60s for crictl version
	I0708 20:49:13.201280   57466 ssh_runner.go:195] Run: which crictl
	I0708 20:49:13.205473   57466 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:49:13.249354   57466 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:49:13.249437   57466 ssh_runner.go:195] Run: crio --version
	I0708 20:49:13.282758   57466 ssh_runner.go:195] Run: crio --version
	I0708 20:49:13.314896   57466 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0708 20:49:13.316053   57466 main.go:141] libmachine: (old-k8s-version-914355) Calling .GetIP
	I0708 20:49:13.318761   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:13.319180   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:81:07", ip: ""} in network mk-old-k8s-version-914355: {Iface:virbr2 ExpiryTime:2024-07-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:2b:81:07 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-914355 Clientid:01:52:54:00:2b:81:07}
	I0708 20:49:13.319209   57466 main.go:141] libmachine: (old-k8s-version-914355) DBG | domain old-k8s-version-914355 has defined IP address 192.168.50.65 and MAC address 52:54:00:2b:81:07 in network mk-old-k8s-version-914355
	I0708 20:49:13.319441   57466 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0708 20:49:13.324004   57466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
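
The one-liner above pins host.minikube.internal in the guest's /etc/hosts: strip any stale entry, append the current gateway IP, and copy the result back into place. A minimal sketch of that idempotent update, assuming direct file access instead of minikube's SSH runner; the IP and hostname are the ones from the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any existing "<tab>name" entry and appends "ip<tab>name",
// the same effect as the grep -v / echo pipeline in the log. It writes
// /etc/hosts directly, so it needs root.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale pin, equivalent to grep -v $'\t<name>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
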
	I0708 20:49:13.337048   57466 kubeadm.go:877] updating cluster {Name:old-k8s-version-914355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:49:13.337190   57466 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0708 20:49:13.337257   57466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:49:13.388697   57466 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0708 20:49:13.388783   57466 ssh_runner.go:195] Run: which lz4
	I0708 20:49:13.393526   57466 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 20:49:13.398916   57466 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:49:13.398958   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0708 20:49:15.172080   57466 crio.go:462] duration metric: took 1.778592569s to copy over tarball
	I0708 20:49:15.172161   57466 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:49:18.093123   57466 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.920923408s)
	I0708 20:49:18.093155   57466 crio.go:469] duration metric: took 2.921048864s to extract the tarball
	I0708 20:49:18.093164   57466 ssh_runner.go:146] rm: /preloaded.tar.lz4
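
The preload step above copies the cached image tarball into the guest and unpacks it under /var with tar and lz4, then deletes the tarball. A rough sketch of the same sequence run locally; it assumes tar and lz4 are on PATH and that the caller has the needed privileges, since minikube performs this through its SSH runner.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball under /var with the
// same tar flags the log shows, then removes the tarball.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
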
	I0708 20:49:18.136341   57466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:49:18.174170   57466 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0708 20:49:18.174199   57466 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 20:49:18.174253   57466 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:49:18.174277   57466 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0708 20:49:18.174324   57466 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 20:49:18.174362   57466 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0708 20:49:18.174411   57466 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0708 20:49:18.174392   57466 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0708 20:49:18.174389   57466 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0708 20:49:18.174417   57466 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0708 20:49:18.175833   57466 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:49:18.175835   57466 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0708 20:49:18.175832   57466 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0708 20:49:18.175832   57466 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0708 20:49:18.175832   57466 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0708 20:49:18.175839   57466 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0708 20:49:18.175838   57466 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 20:49:18.176176   57466 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0708 20:49:18.336513   57466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 20:49:18.337123   57466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0708 20:49:18.338114   57466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0708 20:49:18.348881   57466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0708 20:49:18.349226   57466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0708 20:49:18.355628   57466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0708 20:49:18.369731   57466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0708 20:49:18.466745   57466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:49:18.472848   57466 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0708 20:49:18.472906   57466 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 20:49:18.472959   57466 ssh_runner.go:195] Run: which crictl
	I0708 20:49:18.496874   57466 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0708 20:49:18.496918   57466 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0708 20:49:18.496966   57466 ssh_runner.go:195] Run: which crictl
	I0708 20:49:18.532064   57466 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0708 20:49:18.532119   57466 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0708 20:49:18.532171   57466 ssh_runner.go:195] Run: which crictl
	I0708 20:49:18.547328   57466 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0708 20:49:18.547366   57466 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0708 20:49:18.547391   57466 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0708 20:49:18.547403   57466 ssh_runner.go:195] Run: which crictl
	I0708 20:49:18.547422   57466 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0708 20:49:18.547480   57466 ssh_runner.go:195] Run: which crictl
	I0708 20:49:18.567792   57466 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0708 20:49:18.567827   57466 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0708 20:49:18.567873   57466 ssh_runner.go:195] Run: which crictl
	I0708 20:49:18.573161   57466 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0708 20:49:18.573212   57466 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0708 20:49:18.573256   57466 ssh_runner.go:195] Run: which crictl
	I0708 20:49:18.689790   57466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0708 20:49:18.689863   57466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0708 20:49:18.689956   57466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0708 20:49:18.689969   57466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0708 20:49:18.689959   57466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0708 20:49:18.690020   57466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0708 20:49:18.690031   57466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0708 20:49:18.813573   57466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0708 20:49:18.843015   57466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0708 20:49:18.843084   57466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0708 20:49:18.843122   57466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0708 20:49:18.843159   57466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0708 20:49:18.843218   57466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0708 20:49:18.843251   57466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0708 20:49:18.843288   57466 cache_images.go:92] duration metric: took 669.073789ms to LoadCachedImages
	W0708 20:49:18.843364   57466 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0708 20:49:18.843390   57466 kubeadm.go:928] updating node { 192.168.50.65 8443 v1.20.0 crio true true} ...
	I0708 20:49:18.843540   57466 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-914355 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:49:18.843649   57466 ssh_runner.go:195] Run: crio config
	I0708 20:49:18.897001   57466 cni.go:84] Creating CNI manager for ""
	I0708 20:49:18.897023   57466 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:49:18.897044   57466 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:49:18.897070   57466 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-914355 NodeName:old-k8s-version-914355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0708 20:49:18.897240   57466 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-914355"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:49:18.897313   57466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0708 20:49:18.909655   57466 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:49:18.909727   57466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:49:18.920175   57466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0708 20:49:18.938160   57466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:49:18.955895   57466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0708 20:49:18.974324   57466 ssh_runner.go:195] Run: grep 192.168.50.65	control-plane.minikube.internal$ /etc/hosts
	I0708 20:49:18.978457   57466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:49:18.991600   57466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:49:19.124941   57466 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:49:19.152197   57466 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355 for IP: 192.168.50.65
	I0708 20:49:19.152239   57466 certs.go:194] generating shared ca certs ...
	I0708 20:49:19.152263   57466 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:49:19.152458   57466 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:49:19.152522   57466 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:49:19.152537   57466 certs.go:256] generating profile certs ...
	I0708 20:49:19.152675   57466 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.key
	I0708 20:49:19.152751   57466 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.key.8b45f3cf
	I0708 20:49:19.152822   57466 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/proxy-client.key
	I0708 20:49:19.152996   57466 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:49:19.153059   57466 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:49:19.153074   57466 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:49:19.153121   57466 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:49:19.153161   57466 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:49:19.153192   57466 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:49:19.153258   57466 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:49:19.153969   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:49:19.206913   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:49:19.237824   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:49:19.274173   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:49:19.316418   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0708 20:49:19.358345   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:49:19.390044   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:49:19.422744   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 20:49:19.449881   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:49:19.475133   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:49:19.501085   57466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:49:19.525845   57466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:49:19.543311   57466 ssh_runner.go:195] Run: openssl version
	I0708 20:49:19.549309   57466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:49:19.561987   57466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:49:19.567796   57466 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:49:19.567856   57466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:49:19.574268   57466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:49:19.585618   57466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:49:19.597133   57466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:49:19.601588   57466 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:49:19.601656   57466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:49:19.607339   57466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:49:19.618532   57466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:49:19.630030   57466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:49:19.634778   57466 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:49:19.634856   57466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:49:19.640644   57466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
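
The openssl/ln pairs above install each CA PEM into the guest's trust store: openssl reports the certificate's subject hash, and a <hash>.0 symlink under /etc/ssl/certs makes it discoverable to OpenSSL-based clients. A small sketch of that step; the example certificate path is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert asks openssl for the certificate's subject hash and links
// the PEM as <certsDir>/<hash>.0, the lookup name OpenSSL clients use.
func installCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked, mirrors the `test -L ... ||` guard above
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
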
	I0708 20:49:19.652745   57466 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:49:19.657722   57466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:49:19.664195   57466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:49:19.670362   57466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:49:19.676750   57466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:49:19.683328   57466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:49:19.689425   57466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
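
The -checkend 86400 probes above verify that each control-plane certificate stays valid for at least another day. A small in-process equivalent using crypto/x509; the certificate path is just an example taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for
// at least the given duration, the check `openssl x509 -checkend` performs.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("valid for at least 24h:", ok)
}
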
	I0708 20:49:19.695532   57466 kubeadm.go:391] StartCluster: {Name:old-k8s-version-914355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:49:19.695615   57466 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:49:19.695660   57466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:49:19.736826   57466 cri.go:89] found id: ""
	I0708 20:49:19.736926   57466 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:49:19.749179   57466 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:49:19.749198   57466 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:49:19.749204   57466 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:49:19.749258   57466 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:49:19.760201   57466 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:49:19.761197   57466 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-914355" does not appear in /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:49:19.761937   57466 kubeconfig.go:62] /home/jenkins/minikube-integration/19195-5988/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-914355" cluster setting kubeconfig missing "old-k8s-version-914355" context setting]
	I0708 20:49:19.763500   57466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:49:19.765323   57466 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:49:19.776233   57466 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.65
	I0708 20:49:19.776267   57466 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:49:19.776279   57466 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:49:19.776351   57466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:49:19.815997   57466 cri.go:89] found id: ""
	I0708 20:49:19.816079   57466 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:49:19.835098   57466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:49:19.847756   57466 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:49:19.847781   57466 kubeadm.go:156] found existing configuration files:
	
	I0708 20:49:19.847836   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:49:19.858197   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:49:19.858249   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:49:19.868516   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:49:19.878602   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:49:19.878675   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:49:19.890110   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:49:19.901054   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:49:19.901117   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:49:19.911572   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:49:19.921499   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:49:19.921562   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:49:19.932068   57466 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:49:19.943301   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:49:20.068373   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:49:20.722559   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:49:20.964867   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:49:21.093099   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:49:21.183374   57466 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:49:21.183496   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:21.684094   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:22.183561   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:22.684241   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:23.183629   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:23.684213   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:24.184219   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:24.683872   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:25.183588   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:25.683900   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:26.184201   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:26.684532   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:27.183974   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:27.683979   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:28.183674   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:28.684250   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:29.184597   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:29.684251   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:30.183763   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:30.684529   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:31.184276   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:31.683580   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:32.184417   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:32.684200   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:33.184413   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:33.683578   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:34.183561   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:34.683547   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:35.184573   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:35.683597   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:36.183584   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:36.683646   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:37.183601   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:37.683602   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:38.184601   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:38.683980   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:39.183609   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:39.683588   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:40.183942   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:40.684320   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:41.184026   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:41.683737   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:42.184364   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:42.683671   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:43.183751   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:43.683614   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:44.184521   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:44.683749   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:45.184307   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:45.684497   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:46.183567   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:46.683595   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:47.184591   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:47.683558   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:48.183611   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:48.684543   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:49.183770   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:49.683656   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:50.183560   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:50.683640   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:51.183652   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:51.684310   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:52.183556   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:52.683735   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:53.184121   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:53.684115   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:54.183609   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:54.683914   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:55.183549   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:55.683630   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:56.184440   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:56.684014   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:57.183989   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:57.684517   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:58.183564   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:58.683575   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:59.183614   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:49:59.683650   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:00.184472   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:00.683807   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:01.184195   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:01.684273   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:02.184355   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:02.684629   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:03.183580   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:03.684031   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:04.184274   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:04.684073   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:05.184345   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:05.683689   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:06.183564   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:06.684065   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:07.184587   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:07.684012   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:08.183610   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:08.683574   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:09.183579   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:09.683859   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:10.184569   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:10.684344   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:11.183596   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:11.683724   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:12.183585   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:12.683572   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:13.184446   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:13.684484   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:14.183583   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:14.683690   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:15.184191   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:15.683566   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:16.184224   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:16.684322   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:17.184235   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:17.684366   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:18.183967   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:18.683842   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:19.184521   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:19.683520   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:20.184596   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:20.684495   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
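	(The run of pgrep calls above is the apiserver wait loop: the process check "sudo pgrep -xnf kube-apiserver.*minikube.*" is retried roughly every 500 ms until it succeeds or the overall wait expires, after which the tooling falls back to the CRI queries and log gathering that follow. The Go sketch below illustrates that polling pattern only; the function and runner names are hypothetical, and the 500 ms interval and the two-minute deadline are assumptions read off the timestamps, not minikube's actual implementation.)

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// runCmd is a hypothetical stand-in for the remote command runner
	// (ssh_runner) seen in the log; here it just runs the command locally.
	func runCmd(ctx context.Context, name string, args ...string) error {
		return exec.CommandContext(ctx, name, args...).Run()
	}

	// waitForAPIServerProcess retries `pgrep -xnf kube-apiserver.*minikube.*`
	// on a fixed tick until a matching process exists or the deadline passes.
	func waitForAPIServerProcess(ctx context.Context, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(ctx, timeout)
		defer cancel()
		ticker := time.NewTicker(500 * time.Millisecond) // interval inferred from the log timestamps
		defer ticker.Stop()
		for {
			if err := runCmd(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"); err == nil {
				return nil // a matching kube-apiserver process is running
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("kube-apiserver process never appeared: %w", ctx.Err())
			case <-ticker.C:
				// retry on the next tick
			}
		}
	}

	func main() {
		// The two-minute budget here is an assumption for illustration only.
		if err := waitForAPIServerProcess(context.Background(), 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
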
	I0708 20:50:21.184297   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:21.184369   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:21.224944   57466 cri.go:89] found id: ""
	I0708 20:50:21.224975   57466 logs.go:276] 0 containers: []
	W0708 20:50:21.224983   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:21.224988   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:21.225045   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:21.261204   57466 cri.go:89] found id: ""
	I0708 20:50:21.261235   57466 logs.go:276] 0 containers: []
	W0708 20:50:21.261245   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:21.261251   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:21.261310   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:21.302738   57466 cri.go:89] found id: ""
	I0708 20:50:21.302764   57466 logs.go:276] 0 containers: []
	W0708 20:50:21.302772   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:21.302777   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:21.302824   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:21.339093   57466 cri.go:89] found id: ""
	I0708 20:50:21.339132   57466 logs.go:276] 0 containers: []
	W0708 20:50:21.339144   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:21.339152   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:21.339210   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:21.374655   57466 cri.go:89] found id: ""
	I0708 20:50:21.374682   57466 logs.go:276] 0 containers: []
	W0708 20:50:21.374692   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:21.374698   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:21.374754   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:21.408053   57466 cri.go:89] found id: ""
	I0708 20:50:21.408077   57466 logs.go:276] 0 containers: []
	W0708 20:50:21.408099   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:21.408106   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:21.408166   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:21.444271   57466 cri.go:89] found id: ""
	I0708 20:50:21.444294   57466 logs.go:276] 0 containers: []
	W0708 20:50:21.444303   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:21.444310   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:21.444357   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:21.482061   57466 cri.go:89] found id: ""
	I0708 20:50:21.482097   57466 logs.go:276] 0 containers: []
	W0708 20:50:21.482105   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:21.482113   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:21.482125   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:21.606768   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:21.606790   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:21.606803   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:21.673998   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:21.674035   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:50:21.719088   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:21.719127   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:21.781080   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:21.781107   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
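	(The block above is one complete diagnostics pass: for each control-plane component the runner executes "sudo crictl ps -a --quiet --name=<component>", finds no container IDs, and then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs; the repeated "connection to the server localhost:8443 was refused" confirms that no apiserver is serving on that host. A minimal sketch of the container check is below; it only wraps the crictl invocation shown in the log, and the helper name and component list are illustrative rather than minikube's own code.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs wraps the crictl call seen in the log,
	//   sudo crictl ps -a --quiet --name=<name>
	// and returns the container IDs it prints, one per line.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		// Components probed in each diagnostics pass of the log above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
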
	I0708 20:50:24.298437   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:24.315952   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:24.316032   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:24.365650   57466 cri.go:89] found id: ""
	I0708 20:50:24.365676   57466 logs.go:276] 0 containers: []
	W0708 20:50:24.365688   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:24.365695   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:24.365750   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:24.415300   57466 cri.go:89] found id: ""
	I0708 20:50:24.415329   57466 logs.go:276] 0 containers: []
	W0708 20:50:24.415342   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:24.415350   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:24.415413   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:24.463858   57466 cri.go:89] found id: ""
	I0708 20:50:24.463884   57466 logs.go:276] 0 containers: []
	W0708 20:50:24.463902   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:24.463908   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:24.463967   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:24.515088   57466 cri.go:89] found id: ""
	I0708 20:50:24.515118   57466 logs.go:276] 0 containers: []
	W0708 20:50:24.515129   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:24.515136   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:24.515198   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:24.587604   57466 cri.go:89] found id: ""
	I0708 20:50:24.587632   57466 logs.go:276] 0 containers: []
	W0708 20:50:24.587642   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:24.587649   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:24.587709   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:24.652429   57466 cri.go:89] found id: ""
	I0708 20:50:24.652453   57466 logs.go:276] 0 containers: []
	W0708 20:50:24.652462   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:24.652469   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:24.652528   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:24.710443   57466 cri.go:89] found id: ""
	I0708 20:50:24.710469   57466 logs.go:276] 0 containers: []
	W0708 20:50:24.710480   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:24.710487   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:24.710544   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:24.754465   57466 cri.go:89] found id: ""
	I0708 20:50:24.754496   57466 logs.go:276] 0 containers: []
	W0708 20:50:24.754508   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:24.754519   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:24.754537   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:24.813919   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:24.813954   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:50:24.830536   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:24.830568   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:24.919768   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:24.919790   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:24.919806   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:24.996963   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:24.996995   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:50:27.552060   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:27.566703   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:27.566779   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:27.605283   57466 cri.go:89] found id: ""
	I0708 20:50:27.605313   57466 logs.go:276] 0 containers: []
	W0708 20:50:27.605323   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:27.605329   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:27.605377   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:27.645918   57466 cri.go:89] found id: ""
	I0708 20:50:27.645945   57466 logs.go:276] 0 containers: []
	W0708 20:50:27.645955   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:27.645963   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:27.646011   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:27.684139   57466 cri.go:89] found id: ""
	I0708 20:50:27.684165   57466 logs.go:276] 0 containers: []
	W0708 20:50:27.684176   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:27.684183   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:27.684237   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:27.736897   57466 cri.go:89] found id: ""
	I0708 20:50:27.736925   57466 logs.go:276] 0 containers: []
	W0708 20:50:27.736935   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:27.736943   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:27.737001   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:27.778380   57466 cri.go:89] found id: ""
	I0708 20:50:27.778402   57466 logs.go:276] 0 containers: []
	W0708 20:50:27.778411   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:27.778417   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:27.778462   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:27.817791   57466 cri.go:89] found id: ""
	I0708 20:50:27.817818   57466 logs.go:276] 0 containers: []
	W0708 20:50:27.817829   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:27.817837   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:27.817895   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:27.855002   57466 cri.go:89] found id: ""
	I0708 20:50:27.855022   57466 logs.go:276] 0 containers: []
	W0708 20:50:27.855031   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:27.855038   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:27.855091   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:27.899609   57466 cri.go:89] found id: ""
	I0708 20:50:27.899637   57466 logs.go:276] 0 containers: []
	W0708 20:50:27.899658   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:27.899673   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:27.899693   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:27.985608   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:27.985644   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:27.985661   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:28.059273   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:28.059311   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:50:28.100587   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:28.100609   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:28.154172   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:28.154208   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:50:30.671580   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:30.684766   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:30.684842   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:30.718588   57466 cri.go:89] found id: ""
	I0708 20:50:30.718623   57466 logs.go:276] 0 containers: []
	W0708 20:50:30.718635   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:30.718645   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:30.718734   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:30.750909   57466 cri.go:89] found id: ""
	I0708 20:50:30.750937   57466 logs.go:276] 0 containers: []
	W0708 20:50:30.750945   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:30.750950   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:30.750999   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:30.785165   57466 cri.go:89] found id: ""
	I0708 20:50:30.785191   57466 logs.go:276] 0 containers: []
	W0708 20:50:30.785198   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:30.785203   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:30.785251   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:30.821404   57466 cri.go:89] found id: ""
	I0708 20:50:30.821433   57466 logs.go:276] 0 containers: []
	W0708 20:50:30.821441   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:30.821446   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:30.821504   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:30.858195   57466 cri.go:89] found id: ""
	I0708 20:50:30.858220   57466 logs.go:276] 0 containers: []
	W0708 20:50:30.858229   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:30.858234   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:30.858344   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:30.893669   57466 cri.go:89] found id: ""
	I0708 20:50:30.893693   57466 logs.go:276] 0 containers: []
	W0708 20:50:30.893700   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:30.893706   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:30.893772   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:30.934734   57466 cri.go:89] found id: ""
	I0708 20:50:30.934764   57466 logs.go:276] 0 containers: []
	W0708 20:50:30.934775   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:30.934782   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:30.934877   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:30.971801   57466 cri.go:89] found id: ""
	I0708 20:50:30.971834   57466 logs.go:276] 0 containers: []
	W0708 20:50:30.971847   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:30.971879   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:30.971896   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:31.022412   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:31.022453   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:50:31.038567   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:31.038603   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:31.117635   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:31.117658   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:31.117672   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:31.194234   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:31.194272   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:50:33.733992   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:33.747606   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:33.747676   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:33.783672   57466 cri.go:89] found id: ""
	I0708 20:50:33.783700   57466 logs.go:276] 0 containers: []
	W0708 20:50:33.783711   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:33.783719   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:33.783782   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:33.819200   57466 cri.go:89] found id: ""
	I0708 20:50:33.819227   57466 logs.go:276] 0 containers: []
	W0708 20:50:33.819236   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:33.819241   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:33.819291   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:33.854071   57466 cri.go:89] found id: ""
	I0708 20:50:33.854099   57466 logs.go:276] 0 containers: []
	W0708 20:50:33.854110   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:33.854117   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:33.854187   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:33.892115   57466 cri.go:89] found id: ""
	I0708 20:50:33.892158   57466 logs.go:276] 0 containers: []
	W0708 20:50:33.892169   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:33.892176   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:33.892238   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:33.926824   57466 cri.go:89] found id: ""
	I0708 20:50:33.926854   57466 logs.go:276] 0 containers: []
	W0708 20:50:33.926863   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:33.926870   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:33.926935   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:33.961473   57466 cri.go:89] found id: ""
	I0708 20:50:33.961501   57466 logs.go:276] 0 containers: []
	W0708 20:50:33.961510   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:33.961517   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:33.961575   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:34.000369   57466 cri.go:89] found id: ""
	I0708 20:50:34.000399   57466 logs.go:276] 0 containers: []
	W0708 20:50:34.000410   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:34.000418   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:34.000486   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:34.042845   57466 cri.go:89] found id: ""
	I0708 20:50:34.042877   57466 logs.go:276] 0 containers: []
	W0708 20:50:34.042889   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:34.042900   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:34.042915   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:34.105789   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:34.105823   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:50:34.123856   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:34.123882   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:34.201779   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:34.201809   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:34.201823   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:34.296145   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:34.296183   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:50:36.838335   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:36.852009   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:36.852086   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:36.891814   57466 cri.go:89] found id: ""
	I0708 20:50:36.891837   57466 logs.go:276] 0 containers: []
	W0708 20:50:36.891847   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:36.891854   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:36.891913   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:36.931547   57466 cri.go:89] found id: ""
	I0708 20:50:36.931575   57466 logs.go:276] 0 containers: []
	W0708 20:50:36.931587   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:36.931594   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:36.931643   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:36.973722   57466 cri.go:89] found id: ""
	I0708 20:50:36.973749   57466 logs.go:276] 0 containers: []
	W0708 20:50:36.973761   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:36.973768   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:36.973815   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:37.018434   57466 cri.go:89] found id: ""
	I0708 20:50:37.018460   57466 logs.go:276] 0 containers: []
	W0708 20:50:37.018471   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:37.018480   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:37.018528   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:37.059277   57466 cri.go:89] found id: ""
	I0708 20:50:37.059305   57466 logs.go:276] 0 containers: []
	W0708 20:50:37.059317   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:37.059324   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:37.059387   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:37.095028   57466 cri.go:89] found id: ""
	I0708 20:50:37.095053   57466 logs.go:276] 0 containers: []
	W0708 20:50:37.095061   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:37.095066   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:37.095116   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:37.129528   57466 cri.go:89] found id: ""
	I0708 20:50:37.129560   57466 logs.go:276] 0 containers: []
	W0708 20:50:37.129571   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:37.129579   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:37.129635   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:37.165527   57466 cri.go:89] found id: ""
	I0708 20:50:37.165552   57466 logs.go:276] 0 containers: []
	W0708 20:50:37.165559   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:37.165568   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:37.165585   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:50:37.180177   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:37.180217   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:37.257301   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:37.257320   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:37.257331   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:37.330734   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:37.330770   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:50:37.368984   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:37.369014   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:39.918771   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:39.932284   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:39.932356   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:39.968682   57466 cri.go:89] found id: ""
	I0708 20:50:39.968714   57466 logs.go:276] 0 containers: []
	W0708 20:50:39.968725   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:39.968732   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:39.968786   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:40.001482   57466 cri.go:89] found id: ""
	I0708 20:50:40.001510   57466 logs.go:276] 0 containers: []
	W0708 20:50:40.001519   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:40.001526   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:40.001589   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:40.042154   57466 cri.go:89] found id: ""
	I0708 20:50:40.042173   57466 logs.go:276] 0 containers: []
	W0708 20:50:40.042189   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:40.042196   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:40.042252   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:40.079975   57466 cri.go:89] found id: ""
	I0708 20:50:40.079998   57466 logs.go:276] 0 containers: []
	W0708 20:50:40.080014   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:40.080019   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:40.080075   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:40.115650   57466 cri.go:89] found id: ""
	I0708 20:50:40.115678   57466 logs.go:276] 0 containers: []
	W0708 20:50:40.115688   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:40.115695   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:40.115746   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:40.160421   57466 cri.go:89] found id: ""
	I0708 20:50:40.160451   57466 logs.go:276] 0 containers: []
	W0708 20:50:40.160463   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:40.160471   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:40.160523   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:40.204968   57466 cri.go:89] found id: ""
	I0708 20:50:40.204995   57466 logs.go:276] 0 containers: []
	W0708 20:50:40.205005   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:40.205012   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:40.205074   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:40.248586   57466 cri.go:89] found id: ""
	I0708 20:50:40.248613   57466 logs.go:276] 0 containers: []
	W0708 20:50:40.248623   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:40.248634   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:40.248648   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:40.299650   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:40.299679   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:50:40.316517   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:40.316539   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:40.386528   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:40.386559   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:40.386571   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:40.468472   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:40.468502   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:50:43.008347   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:43.022069   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:43.022139   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:43.056405   57466 cri.go:89] found id: ""
	I0708 20:50:43.056432   57466 logs.go:276] 0 containers: []
	W0708 20:50:43.056442   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:43.056447   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:43.056495   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:43.099114   57466 cri.go:89] found id: ""
	I0708 20:50:43.099133   57466 logs.go:276] 0 containers: []
	W0708 20:50:43.099140   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:43.099151   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:43.099199   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:43.132385   57466 cri.go:89] found id: ""
	I0708 20:50:43.132415   57466 logs.go:276] 0 containers: []
	W0708 20:50:43.132422   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:43.132427   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:43.132476   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:43.167727   57466 cri.go:89] found id: ""
	I0708 20:50:43.167747   57466 logs.go:276] 0 containers: []
	W0708 20:50:43.167754   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:43.167759   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:43.167805   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:43.205506   57466 cri.go:89] found id: ""
	I0708 20:50:43.205534   57466 logs.go:276] 0 containers: []
	W0708 20:50:43.205544   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:43.205549   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:43.205606   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:43.241909   57466 cri.go:89] found id: ""
	I0708 20:50:43.241932   57466 logs.go:276] 0 containers: []
	W0708 20:50:43.241939   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:43.241945   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:43.241993   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:43.279298   57466 cri.go:89] found id: ""
	I0708 20:50:43.279328   57466 logs.go:276] 0 containers: []
	W0708 20:50:43.279337   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:43.279343   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:43.279390   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:43.314518   57466 cri.go:89] found id: ""
	I0708 20:50:43.314547   57466 logs.go:276] 0 containers: []
	W0708 20:50:43.314557   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:43.314570   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:43.314586   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:43.375298   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:43.375331   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:50:43.390584   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:43.390614   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:43.467561   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:43.467580   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:43.467593   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:43.540638   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:43.540674   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:50:46.079167   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:46.093076   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:46.093133   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:46.127951   57466 cri.go:89] found id: ""
	I0708 20:50:46.127973   57466 logs.go:276] 0 containers: []
	W0708 20:50:46.127981   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:46.127986   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:46.128032   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:46.176597   57466 cri.go:89] found id: ""
	I0708 20:50:46.176633   57466 logs.go:276] 0 containers: []
	W0708 20:50:46.176645   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:46.176653   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:46.176714   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:46.214658   57466 cri.go:89] found id: ""
	I0708 20:50:46.214688   57466 logs.go:276] 0 containers: []
	W0708 20:50:46.214698   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:46.214706   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:46.214769   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:46.252642   57466 cri.go:89] found id: ""
	I0708 20:50:46.252668   57466 logs.go:276] 0 containers: []
	W0708 20:50:46.252678   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:46.252685   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:46.252744   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:46.288329   57466 cri.go:89] found id: ""
	I0708 20:50:46.288358   57466 logs.go:276] 0 containers: []
	W0708 20:50:46.288369   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:46.288376   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:46.288448   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:46.325362   57466 cri.go:89] found id: ""
	I0708 20:50:46.325391   57466 logs.go:276] 0 containers: []
	W0708 20:50:46.325400   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:46.325406   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:46.325464   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:46.362369   57466 cri.go:89] found id: ""
	I0708 20:50:46.362396   57466 logs.go:276] 0 containers: []
	W0708 20:50:46.362404   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:46.362409   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:46.362455   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:46.397072   57466 cri.go:89] found id: ""
	I0708 20:50:46.397103   57466 logs.go:276] 0 containers: []
	W0708 20:50:46.397119   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:46.397130   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:46.397144   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:46.480337   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:46.480411   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:50:46.524916   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:46.524937   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:46.578581   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:46.578622   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:50:46.594728   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:46.594753   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:46.667244   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:49.168446   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:49.184089   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:49.184152   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:49.224654   57466 cri.go:89] found id: ""
	I0708 20:50:49.224688   57466 logs.go:276] 0 containers: []
	W0708 20:50:49.224698   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:49.224706   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:49.224787   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:49.265993   57466 cri.go:89] found id: ""
	I0708 20:50:49.266019   57466 logs.go:276] 0 containers: []
	W0708 20:50:49.266027   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:49.266032   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:49.266081   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:49.306974   57466 cri.go:89] found id: ""
	I0708 20:50:49.307002   57466 logs.go:276] 0 containers: []
	W0708 20:50:49.307013   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:49.307020   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:49.307080   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:49.346842   57466 cri.go:89] found id: ""
	I0708 20:50:49.346867   57466 logs.go:276] 0 containers: []
	W0708 20:50:49.346877   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:49.346883   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:49.346944   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:49.381919   57466 cri.go:89] found id: ""
	I0708 20:50:49.381946   57466 logs.go:276] 0 containers: []
	W0708 20:50:49.381956   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:49.381963   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:49.382027   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:49.415722   57466 cri.go:89] found id: ""
	I0708 20:50:49.415749   57466 logs.go:276] 0 containers: []
	W0708 20:50:49.415760   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:49.415766   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:49.415825   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:49.458057   57466 cri.go:89] found id: ""
	I0708 20:50:49.458085   57466 logs.go:276] 0 containers: []
	W0708 20:50:49.458094   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:49.458099   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:49.458153   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:49.495001   57466 cri.go:89] found id: ""
	I0708 20:50:49.495034   57466 logs.go:276] 0 containers: []
	W0708 20:50:49.495045   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:49.495057   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:49.495069   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:49.545335   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:49.545367   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:50:49.558849   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:49.558873   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:49.633726   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:49.633756   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:49.633773   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:49.709433   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:49.709467   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
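Each cycle above is minikube's apiserver wait loop: it pgreps for a kube-apiserver process, asks CRI-O through crictl for containers of every control-plane component, and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output before retrying a few seconds later. A minimal shell sketch of the same per-component check, for running by hand on the node (it assumes only that crictl is on the PATH, exactly as the logged commands do):

	# List all containers (running or exited) for each component the loop probes;
	# an empty result corresponds to the "No container was found matching ..." lines above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container matching $name"
	  else
	    echo "$name: $ids"
	  fi
	done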
	I0708 20:50:52.249683   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:52.263704   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:52.263777   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:52.301815   57466 cri.go:89] found id: ""
	I0708 20:50:52.301847   57466 logs.go:276] 0 containers: []
	W0708 20:50:52.301855   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:52.301862   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:52.301924   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:52.339027   57466 cri.go:89] found id: ""
	I0708 20:50:52.339050   57466 logs.go:276] 0 containers: []
	W0708 20:50:52.339060   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:52.339067   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:52.339121   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:52.375576   57466 cri.go:89] found id: ""
	I0708 20:50:52.375597   57466 logs.go:276] 0 containers: []
	W0708 20:50:52.375606   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:52.375614   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:52.375675   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:52.413736   57466 cri.go:89] found id: ""
	I0708 20:50:52.413763   57466 logs.go:276] 0 containers: []
	W0708 20:50:52.413774   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:52.413783   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:52.413846   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:52.453325   57466 cri.go:89] found id: ""
	I0708 20:50:52.453364   57466 logs.go:276] 0 containers: []
	W0708 20:50:52.453379   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:52.453386   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:52.453448   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:52.488724   57466 cri.go:89] found id: ""
	I0708 20:50:52.488754   57466 logs.go:276] 0 containers: []
	W0708 20:50:52.488766   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:52.488773   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:52.488841   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:52.529905   57466 cri.go:89] found id: ""
	I0708 20:50:52.529931   57466 logs.go:276] 0 containers: []
	W0708 20:50:52.529941   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:52.529948   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:52.530009   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:52.571421   57466 cri.go:89] found id: ""
	I0708 20:50:52.571457   57466 logs.go:276] 0 containers: []
	W0708 20:50:52.571468   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:52.571479   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:52.571494   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:52.649769   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:52.649801   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:50:52.696850   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:52.696882   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:52.751037   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:52.751075   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:50:52.764838   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:52.764865   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:52.834904   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:55.335557   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:55.349474   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:55.349545   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:55.388341   57466 cri.go:89] found id: ""
	I0708 20:50:55.388372   57466 logs.go:276] 0 containers: []
	W0708 20:50:55.388380   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:55.388385   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:55.388434   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:55.426196   57466 cri.go:89] found id: ""
	I0708 20:50:55.426220   57466 logs.go:276] 0 containers: []
	W0708 20:50:55.426229   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:55.426234   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:55.426278   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:55.463485   57466 cri.go:89] found id: ""
	I0708 20:50:55.463511   57466 logs.go:276] 0 containers: []
	W0708 20:50:55.463520   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:55.463524   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:55.463569   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:55.498778   57466 cri.go:89] found id: ""
	I0708 20:50:55.498801   57466 logs.go:276] 0 containers: []
	W0708 20:50:55.498808   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:55.498814   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:55.498873   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:55.534510   57466 cri.go:89] found id: ""
	I0708 20:50:55.534541   57466 logs.go:276] 0 containers: []
	W0708 20:50:55.534550   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:55.534555   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:55.534604   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:55.579482   57466 cri.go:89] found id: ""
	I0708 20:50:55.579506   57466 logs.go:276] 0 containers: []
	W0708 20:50:55.579514   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:55.579523   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:55.579579   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:55.615482   57466 cri.go:89] found id: ""
	I0708 20:50:55.615512   57466 logs.go:276] 0 containers: []
	W0708 20:50:55.615520   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:55.615526   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:55.615576   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:55.649784   57466 cri.go:89] found id: ""
	I0708 20:50:55.649813   57466 logs.go:276] 0 containers: []
	W0708 20:50:55.649822   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:55.649830   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:55.649841   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:55.700761   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:55.700804   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:50:55.714688   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:55.714723   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:55.784750   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:55.784769   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:55.784780   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:55.859838   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:55.859872   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
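Every "describe nodes" attempt in these cycles fails identically: the connection to localhost:8443 is refused, which indicates the apiserver was never started rather than merely unhealthy. A hedged way to confirm that independently of kubectl (assuming curl is available on the node, which the log does not show) is to probe the secure port directly:

	# A refused TCP connection here matches the kubectl error above;
	# /healthz is the apiserver's standard health endpoint.
	curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable on 8443"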
	I0708 20:50:58.397964   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:50:58.412028   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:50:58.412100   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:50:58.450405   57466 cri.go:89] found id: ""
	I0708 20:50:58.450432   57466 logs.go:276] 0 containers: []
	W0708 20:50:58.450440   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:50:58.450445   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:50:58.450505   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:50:58.488631   57466 cri.go:89] found id: ""
	I0708 20:50:58.488661   57466 logs.go:276] 0 containers: []
	W0708 20:50:58.488670   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:50:58.488675   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:50:58.488723   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:50:58.530782   57466 cri.go:89] found id: ""
	I0708 20:50:58.530814   57466 logs.go:276] 0 containers: []
	W0708 20:50:58.530824   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:50:58.530831   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:50:58.530879   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:50:58.566147   57466 cri.go:89] found id: ""
	I0708 20:50:58.566179   57466 logs.go:276] 0 containers: []
	W0708 20:50:58.566189   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:50:58.566196   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:50:58.566256   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:50:58.601576   57466 cri.go:89] found id: ""
	I0708 20:50:58.601600   57466 logs.go:276] 0 containers: []
	W0708 20:50:58.601610   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:50:58.601616   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:50:58.601675   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:50:58.651739   57466 cri.go:89] found id: ""
	I0708 20:50:58.651763   57466 logs.go:276] 0 containers: []
	W0708 20:50:58.651772   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:50:58.651787   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:50:58.651847   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:50:58.688138   57466 cri.go:89] found id: ""
	I0708 20:50:58.688165   57466 logs.go:276] 0 containers: []
	W0708 20:50:58.688173   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:50:58.688183   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:50:58.688236   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:50:58.723612   57466 cri.go:89] found id: ""
	I0708 20:50:58.723638   57466 logs.go:276] 0 containers: []
	W0708 20:50:58.723646   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:50:58.723657   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:50:58.723674   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:50:58.774541   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:50:58.774571   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:50:58.788794   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:50:58.788819   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:50:58.862018   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:50:58.862041   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:50:58.862053   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:50:58.945751   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:50:58.945783   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:01.483216   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:01.497526   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:01.497594   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:01.534681   57466 cri.go:89] found id: ""
	I0708 20:51:01.534708   57466 logs.go:276] 0 containers: []
	W0708 20:51:01.534715   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:01.534722   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:01.534778   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:01.569607   57466 cri.go:89] found id: ""
	I0708 20:51:01.569638   57466 logs.go:276] 0 containers: []
	W0708 20:51:01.569650   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:01.569657   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:01.569719   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:01.607381   57466 cri.go:89] found id: ""
	I0708 20:51:01.607413   57466 logs.go:276] 0 containers: []
	W0708 20:51:01.607424   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:01.607431   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:01.607512   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:01.645557   57466 cri.go:89] found id: ""
	I0708 20:51:01.645583   57466 logs.go:276] 0 containers: []
	W0708 20:51:01.645593   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:01.645601   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:01.645665   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:01.680713   57466 cri.go:89] found id: ""
	I0708 20:51:01.680742   57466 logs.go:276] 0 containers: []
	W0708 20:51:01.680751   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:01.680758   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:01.680822   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:01.721727   57466 cri.go:89] found id: ""
	I0708 20:51:01.721751   57466 logs.go:276] 0 containers: []
	W0708 20:51:01.721761   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:01.721768   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:01.721830   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:01.756571   57466 cri.go:89] found id: ""
	I0708 20:51:01.756601   57466 logs.go:276] 0 containers: []
	W0708 20:51:01.756612   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:01.756619   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:01.756671   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:01.799333   57466 cri.go:89] found id: ""
	I0708 20:51:01.799355   57466 logs.go:276] 0 containers: []
	W0708 20:51:01.799363   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:01.799373   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:01.799388   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:01.849341   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:01.849374   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:01.863768   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:01.863792   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:01.939494   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:01.939520   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:01.939535   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:02.012778   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:02.012812   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:04.555579   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:04.570943   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:04.571001   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:04.608966   57466 cri.go:89] found id: ""
	I0708 20:51:04.608995   57466 logs.go:276] 0 containers: []
	W0708 20:51:04.609005   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:04.609012   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:04.609073   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:04.645783   57466 cri.go:89] found id: ""
	I0708 20:51:04.645805   57466 logs.go:276] 0 containers: []
	W0708 20:51:04.645815   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:04.645821   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:04.645878   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:04.683730   57466 cri.go:89] found id: ""
	I0708 20:51:04.683749   57466 logs.go:276] 0 containers: []
	W0708 20:51:04.683757   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:04.683762   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:04.683807   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:04.722799   57466 cri.go:89] found id: ""
	I0708 20:51:04.722825   57466 logs.go:276] 0 containers: []
	W0708 20:51:04.722835   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:04.722843   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:04.722919   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:04.760985   57466 cri.go:89] found id: ""
	I0708 20:51:04.761014   57466 logs.go:276] 0 containers: []
	W0708 20:51:04.761024   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:04.761032   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:04.761089   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:04.803970   57466 cri.go:89] found id: ""
	I0708 20:51:04.803994   57466 logs.go:276] 0 containers: []
	W0708 20:51:04.804001   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:04.804006   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:04.804054   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:04.840370   57466 cri.go:89] found id: ""
	I0708 20:51:04.840399   57466 logs.go:276] 0 containers: []
	W0708 20:51:04.840410   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:04.840417   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:04.840510   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:04.876817   57466 cri.go:89] found id: ""
	I0708 20:51:04.876848   57466 logs.go:276] 0 containers: []
	W0708 20:51:04.876860   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:04.876872   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:04.876886   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:04.934566   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:04.934594   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:04.948666   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:04.948700   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:05.039824   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:05.039848   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:05.039862   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:05.133078   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:05.133112   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:07.682610   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:07.696091   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:07.696149   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:07.735868   57466 cri.go:89] found id: ""
	I0708 20:51:07.735894   57466 logs.go:276] 0 containers: []
	W0708 20:51:07.735904   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:07.735909   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:07.735968   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:07.776232   57466 cri.go:89] found id: ""
	I0708 20:51:07.776265   57466 logs.go:276] 0 containers: []
	W0708 20:51:07.776276   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:07.776282   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:07.776341   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:07.827023   57466 cri.go:89] found id: ""
	I0708 20:51:07.827055   57466 logs.go:276] 0 containers: []
	W0708 20:51:07.827066   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:07.827073   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:07.827140   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:07.862677   57466 cri.go:89] found id: ""
	I0708 20:51:07.862708   57466 logs.go:276] 0 containers: []
	W0708 20:51:07.862721   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:07.862730   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:07.862795   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:07.900546   57466 cri.go:89] found id: ""
	I0708 20:51:07.900575   57466 logs.go:276] 0 containers: []
	W0708 20:51:07.900583   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:07.900589   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:07.900636   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:07.939161   57466 cri.go:89] found id: ""
	I0708 20:51:07.939187   57466 logs.go:276] 0 containers: []
	W0708 20:51:07.939194   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:07.939203   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:07.939252   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:07.973649   57466 cri.go:89] found id: ""
	I0708 20:51:07.973678   57466 logs.go:276] 0 containers: []
	W0708 20:51:07.973688   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:07.973695   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:07.973760   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:08.008482   57466 cri.go:89] found id: ""
	I0708 20:51:08.008510   57466 logs.go:276] 0 containers: []
	W0708 20:51:08.008517   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:08.008526   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:08.008538   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:08.080250   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:08.080279   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:08.080295   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:08.154849   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:08.154881   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:08.207254   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:08.207279   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:08.257374   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:08.257418   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:10.773477   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:10.795531   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:10.795598   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:10.849107   57466 cri.go:89] found id: ""
	I0708 20:51:10.849133   57466 logs.go:276] 0 containers: []
	W0708 20:51:10.849143   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:10.849150   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:10.849216   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:10.920849   57466 cri.go:89] found id: ""
	I0708 20:51:10.920875   57466 logs.go:276] 0 containers: []
	W0708 20:51:10.920883   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:10.920890   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:10.920939   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:10.958248   57466 cri.go:89] found id: ""
	I0708 20:51:10.958277   57466 logs.go:276] 0 containers: []
	W0708 20:51:10.958288   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:10.958294   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:10.958376   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:10.994713   57466 cri.go:89] found id: ""
	I0708 20:51:10.994741   57466 logs.go:276] 0 containers: []
	W0708 20:51:10.994749   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:10.994755   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:10.994801   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:11.028343   57466 cri.go:89] found id: ""
	I0708 20:51:11.028368   57466 logs.go:276] 0 containers: []
	W0708 20:51:11.028378   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:11.028383   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:11.028439   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:11.067666   57466 cri.go:89] found id: ""
	I0708 20:51:11.067696   57466 logs.go:276] 0 containers: []
	W0708 20:51:11.067704   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:11.067709   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:11.067755   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:11.101184   57466 cri.go:89] found id: ""
	I0708 20:51:11.101210   57466 logs.go:276] 0 containers: []
	W0708 20:51:11.101219   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:11.101225   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:11.101271   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:11.136850   57466 cri.go:89] found id: ""
	I0708 20:51:11.136876   57466 logs.go:276] 0 containers: []
	W0708 20:51:11.136884   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:11.136892   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:11.136912   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:11.187255   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:11.187290   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:11.202124   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:11.202156   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:11.277776   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:11.277797   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:11.277812   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:11.356242   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:11.356277   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:13.897795   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:13.910636   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:13.910700   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:13.953811   57466 cri.go:89] found id: ""
	I0708 20:51:13.953840   57466 logs.go:276] 0 containers: []
	W0708 20:51:13.953849   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:13.953854   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:13.953911   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:13.989028   57466 cri.go:89] found id: ""
	I0708 20:51:13.989063   57466 logs.go:276] 0 containers: []
	W0708 20:51:13.989075   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:13.989082   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:13.989150   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:14.022170   57466 cri.go:89] found id: ""
	I0708 20:51:14.022198   57466 logs.go:276] 0 containers: []
	W0708 20:51:14.022208   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:14.022215   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:14.022278   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:14.056031   57466 cri.go:89] found id: ""
	I0708 20:51:14.056060   57466 logs.go:276] 0 containers: []
	W0708 20:51:14.056070   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:14.056078   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:14.056142   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:14.091074   57466 cri.go:89] found id: ""
	I0708 20:51:14.091099   57466 logs.go:276] 0 containers: []
	W0708 20:51:14.091107   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:14.091112   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:14.091160   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:14.126350   57466 cri.go:89] found id: ""
	I0708 20:51:14.126383   57466 logs.go:276] 0 containers: []
	W0708 20:51:14.126394   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:14.126402   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:14.126457   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:14.166130   57466 cri.go:89] found id: ""
	I0708 20:51:14.166150   57466 logs.go:276] 0 containers: []
	W0708 20:51:14.166157   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:14.166163   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:14.166208   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:14.205652   57466 cri.go:89] found id: ""
	I0708 20:51:14.205674   57466 logs.go:276] 0 containers: []
	W0708 20:51:14.205683   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:14.205693   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:14.205708   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:14.259757   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:14.259792   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:14.273979   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:14.274009   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:14.341826   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:14.341846   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:14.341858   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:14.418282   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:14.418315   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:16.961698   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:16.974526   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:16.974602   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:17.009139   57466 cri.go:89] found id: ""
	I0708 20:51:17.009165   57466 logs.go:276] 0 containers: []
	W0708 20:51:17.009174   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:17.009181   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:17.009229   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:17.044852   57466 cri.go:89] found id: ""
	I0708 20:51:17.044875   57466 logs.go:276] 0 containers: []
	W0708 20:51:17.044882   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:17.044887   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:17.044936   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:17.078560   57466 cri.go:89] found id: ""
	I0708 20:51:17.078588   57466 logs.go:276] 0 containers: []
	W0708 20:51:17.078596   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:17.078602   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:17.078656   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:17.113977   57466 cri.go:89] found id: ""
	I0708 20:51:17.114006   57466 logs.go:276] 0 containers: []
	W0708 20:51:17.114014   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:17.114019   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:17.114068   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:17.147838   57466 cri.go:89] found id: ""
	I0708 20:51:17.147865   57466 logs.go:276] 0 containers: []
	W0708 20:51:17.147873   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:17.147879   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:17.147923   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:17.182804   57466 cri.go:89] found id: ""
	I0708 20:51:17.182826   57466 logs.go:276] 0 containers: []
	W0708 20:51:17.182833   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:17.182840   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:17.182894   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:17.216884   57466 cri.go:89] found id: ""
	I0708 20:51:17.216918   57466 logs.go:276] 0 containers: []
	W0708 20:51:17.216932   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:17.216940   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:17.216997   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:17.252278   57466 cri.go:89] found id: ""
	I0708 20:51:17.252305   57466 logs.go:276] 0 containers: []
	W0708 20:51:17.252314   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:17.252326   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:17.252341   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:17.322726   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:17.322747   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:17.322759   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:17.398825   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:17.398866   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:17.438232   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:17.438266   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:17.488499   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:17.488531   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:20.002399   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:20.015680   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:20.015741   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:20.050383   57466 cri.go:89] found id: ""
	I0708 20:51:20.050411   57466 logs.go:276] 0 containers: []
	W0708 20:51:20.050423   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:20.050430   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:20.050490   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:20.085363   57466 cri.go:89] found id: ""
	I0708 20:51:20.085396   57466 logs.go:276] 0 containers: []
	W0708 20:51:20.085407   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:20.085415   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:20.085478   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:20.121105   57466 cri.go:89] found id: ""
	I0708 20:51:20.121136   57466 logs.go:276] 0 containers: []
	W0708 20:51:20.121144   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:20.121150   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:20.121205   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:20.156840   57466 cri.go:89] found id: ""
	I0708 20:51:20.156869   57466 logs.go:276] 0 containers: []
	W0708 20:51:20.156877   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:20.156883   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:20.156944   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:20.191872   57466 cri.go:89] found id: ""
	I0708 20:51:20.191896   57466 logs.go:276] 0 containers: []
	W0708 20:51:20.191903   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:20.191914   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:20.191965   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:20.226552   57466 cri.go:89] found id: ""
	I0708 20:51:20.226580   57466 logs.go:276] 0 containers: []
	W0708 20:51:20.226588   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:20.226593   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:20.226642   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:20.261434   57466 cri.go:89] found id: ""
	I0708 20:51:20.261461   57466 logs.go:276] 0 containers: []
	W0708 20:51:20.261469   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:20.261476   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:20.261537   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:20.296297   57466 cri.go:89] found id: ""
	I0708 20:51:20.296325   57466 logs.go:276] 0 containers: []
	W0708 20:51:20.296335   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:20.296347   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:20.296362   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:20.309494   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:20.309523   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:20.376032   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:20.376058   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:20.376073   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:20.455478   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:20.455512   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:20.493789   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:20.493821   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:23.045747   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:23.058426   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:23.058492   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:23.091955   57466 cri.go:89] found id: ""
	I0708 20:51:23.091986   57466 logs.go:276] 0 containers: []
	W0708 20:51:23.091997   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:23.092004   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:23.092060   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:23.126240   57466 cri.go:89] found id: ""
	I0708 20:51:23.126268   57466 logs.go:276] 0 containers: []
	W0708 20:51:23.126276   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:23.126281   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:23.126338   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:23.160127   57466 cri.go:89] found id: ""
	I0708 20:51:23.160156   57466 logs.go:276] 0 containers: []
	W0708 20:51:23.160166   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:23.160177   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:23.160237   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:23.194708   57466 cri.go:89] found id: ""
	I0708 20:51:23.194741   57466 logs.go:276] 0 containers: []
	W0708 20:51:23.194752   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:23.194759   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:23.194817   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:23.228721   57466 cri.go:89] found id: ""
	I0708 20:51:23.228749   57466 logs.go:276] 0 containers: []
	W0708 20:51:23.228758   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:23.228764   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:23.228825   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:23.264989   57466 cri.go:89] found id: ""
	I0708 20:51:23.265012   57466 logs.go:276] 0 containers: []
	W0708 20:51:23.265020   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:23.265025   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:23.265085   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:23.302004   57466 cri.go:89] found id: ""
	I0708 20:51:23.302029   57466 logs.go:276] 0 containers: []
	W0708 20:51:23.302042   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:23.302047   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:23.302103   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:23.336391   57466 cri.go:89] found id: ""
	I0708 20:51:23.336417   57466 logs.go:276] 0 containers: []
	W0708 20:51:23.336425   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:23.336435   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:23.336449   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:23.384173   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:23.384205   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:23.398044   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:23.398075   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:23.468356   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:23.468383   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:23.468400   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:23.542668   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:23.542705   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
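The cycle above is the retry loop minikube runs while waiting for the control plane to come up: it probes for a kube-apiserver process, lists CRI containers for each expected component, then gathers kubelet, dmesg, CRI-O, and container-status logs; "describe nodes" keeps failing because nothing is listening on localhost:8443. A minimal sketch of rerunning the same checks by hand inside the guest (assumption: shell access to the node, e.g. via minikube ssh; the commands themselves are the ones shown verbatim in the log above):

	# Is a kube-apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Has CRI-O ever created a kube-apiserver container (running or exited)?
	sudo crictl ps -a --quiet --name=kube-apiserver
	# If not, the kubelet and CRI-O journals usually explain why the static pod never started.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# This is the call that keeps failing with "connection refused" while the apiserver is down.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig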
	I0708 20:51:26.082746   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:26.097717   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:26.097793   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:26.137045   57466 cri.go:89] found id: ""
	I0708 20:51:26.137074   57466 logs.go:276] 0 containers: []
	W0708 20:51:26.137083   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:26.137090   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:26.137153   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:26.178631   57466 cri.go:89] found id: ""
	I0708 20:51:26.178658   57466 logs.go:276] 0 containers: []
	W0708 20:51:26.178666   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:26.178671   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:26.178722   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:26.213288   57466 cri.go:89] found id: ""
	I0708 20:51:26.213315   57466 logs.go:276] 0 containers: []
	W0708 20:51:26.213324   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:26.213331   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:26.213393   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:26.250909   57466 cri.go:89] found id: ""
	I0708 20:51:26.250934   57466 logs.go:276] 0 containers: []
	W0708 20:51:26.250944   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:26.250951   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:26.251012   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:26.293653   57466 cri.go:89] found id: ""
	I0708 20:51:26.293683   57466 logs.go:276] 0 containers: []
	W0708 20:51:26.293693   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:26.293699   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:26.293763   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:26.328095   57466 cri.go:89] found id: ""
	I0708 20:51:26.328123   57466 logs.go:276] 0 containers: []
	W0708 20:51:26.328133   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:26.328140   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:26.328200   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:26.368173   57466 cri.go:89] found id: ""
	I0708 20:51:26.368195   57466 logs.go:276] 0 containers: []
	W0708 20:51:26.368203   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:26.368208   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:26.368254   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:26.405734   57466 cri.go:89] found id: ""
	I0708 20:51:26.405755   57466 logs.go:276] 0 containers: []
	W0708 20:51:26.405769   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:26.405777   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:26.405789   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:26.457653   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:26.457689   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:26.471753   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:26.471783   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:26.550216   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:26.550238   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:26.550252   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:26.628455   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:26.628486   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:29.174481   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:29.188376   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:29.188430   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:29.222310   57466 cri.go:89] found id: ""
	I0708 20:51:29.222335   57466 logs.go:276] 0 containers: []
	W0708 20:51:29.222342   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:29.222347   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:29.222403   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:29.256290   57466 cri.go:89] found id: ""
	I0708 20:51:29.256325   57466 logs.go:276] 0 containers: []
	W0708 20:51:29.256335   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:29.256343   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:29.256401   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:29.292868   57466 cri.go:89] found id: ""
	I0708 20:51:29.292899   57466 logs.go:276] 0 containers: []
	W0708 20:51:29.292916   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:29.292924   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:29.292984   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:29.331234   57466 cri.go:89] found id: ""
	I0708 20:51:29.331264   57466 logs.go:276] 0 containers: []
	W0708 20:51:29.331273   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:29.331279   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:29.331328   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:29.370331   57466 cri.go:89] found id: ""
	I0708 20:51:29.370356   57466 logs.go:276] 0 containers: []
	W0708 20:51:29.370363   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:29.370369   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:29.370436   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:29.405210   57466 cri.go:89] found id: ""
	I0708 20:51:29.405236   57466 logs.go:276] 0 containers: []
	W0708 20:51:29.405243   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:29.405249   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:29.405307   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:29.439665   57466 cri.go:89] found id: ""
	I0708 20:51:29.439691   57466 logs.go:276] 0 containers: []
	W0708 20:51:29.439698   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:29.439703   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:29.439752   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:29.473425   57466 cri.go:89] found id: ""
	I0708 20:51:29.473464   57466 logs.go:276] 0 containers: []
	W0708 20:51:29.473477   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:29.473488   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:29.473503   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:29.512151   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:29.512192   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:29.563935   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:29.563970   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:29.577574   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:29.577602   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:29.651863   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:29.651886   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:29.651900   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:32.225078   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:32.238741   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:32.238820   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:32.276333   57466 cri.go:89] found id: ""
	I0708 20:51:32.276360   57466 logs.go:276] 0 containers: []
	W0708 20:51:32.276368   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:32.276373   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:32.276421   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:32.310491   57466 cri.go:89] found id: ""
	I0708 20:51:32.310513   57466 logs.go:276] 0 containers: []
	W0708 20:51:32.310522   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:32.310528   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:32.310589   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:32.344105   57466 cri.go:89] found id: ""
	I0708 20:51:32.344130   57466 logs.go:276] 0 containers: []
	W0708 20:51:32.344140   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:32.344147   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:32.344210   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:32.377954   57466 cri.go:89] found id: ""
	I0708 20:51:32.377980   57466 logs.go:276] 0 containers: []
	W0708 20:51:32.377991   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:32.377998   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:32.378059   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:32.411980   57466 cri.go:89] found id: ""
	I0708 20:51:32.412002   57466 logs.go:276] 0 containers: []
	W0708 20:51:32.412012   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:32.412019   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:32.412077   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:32.454586   57466 cri.go:89] found id: ""
	I0708 20:51:32.454616   57466 logs.go:276] 0 containers: []
	W0708 20:51:32.454628   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:32.454636   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:32.454695   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:32.489142   57466 cri.go:89] found id: ""
	I0708 20:51:32.489168   57466 logs.go:276] 0 containers: []
	W0708 20:51:32.489177   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:32.489185   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:32.489252   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:32.524806   57466 cri.go:89] found id: ""
	I0708 20:51:32.524851   57466 logs.go:276] 0 containers: []
	W0708 20:51:32.524867   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:32.524877   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:32.524890   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:32.575964   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:32.576006   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:32.589704   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:32.589736   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:32.660794   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:32.660827   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:32.660855   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:32.740241   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:32.740270   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:35.286855   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:35.300187   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:35.300259   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:35.334518   57466 cri.go:89] found id: ""
	I0708 20:51:35.334545   57466 logs.go:276] 0 containers: []
	W0708 20:51:35.334556   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:35.334564   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:35.334622   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:35.370824   57466 cri.go:89] found id: ""
	I0708 20:51:35.370857   57466 logs.go:276] 0 containers: []
	W0708 20:51:35.370868   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:35.370875   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:35.370933   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:35.405317   57466 cri.go:89] found id: ""
	I0708 20:51:35.405348   57466 logs.go:276] 0 containers: []
	W0708 20:51:35.405357   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:35.405364   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:35.405428   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:35.438207   57466 cri.go:89] found id: ""
	I0708 20:51:35.438233   57466 logs.go:276] 0 containers: []
	W0708 20:51:35.438241   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:35.438246   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:35.438293   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:35.472323   57466 cri.go:89] found id: ""
	I0708 20:51:35.472353   57466 logs.go:276] 0 containers: []
	W0708 20:51:35.472361   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:35.472367   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:35.472427   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:35.506617   57466 cri.go:89] found id: ""
	I0708 20:51:35.506645   57466 logs.go:276] 0 containers: []
	W0708 20:51:35.506655   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:35.506662   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:35.506723   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:35.540204   57466 cri.go:89] found id: ""
	I0708 20:51:35.540241   57466 logs.go:276] 0 containers: []
	W0708 20:51:35.540252   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:35.540260   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:35.540323   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:35.574591   57466 cri.go:89] found id: ""
	I0708 20:51:35.574613   57466 logs.go:276] 0 containers: []
	W0708 20:51:35.574620   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:35.574630   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:35.574649   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:35.653766   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:35.653803   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:35.694667   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:35.694692   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:35.745277   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:35.745319   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:35.759550   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:35.759584   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:35.828461   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:38.328882   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:38.341527   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:38.341585   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:38.374339   57466 cri.go:89] found id: ""
	I0708 20:51:38.374363   57466 logs.go:276] 0 containers: []
	W0708 20:51:38.374372   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:38.374378   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:38.374426   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:38.406762   57466 cri.go:89] found id: ""
	I0708 20:51:38.406787   57466 logs.go:276] 0 containers: []
	W0708 20:51:38.406795   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:38.406802   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:38.406852   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:38.439908   57466 cri.go:89] found id: ""
	I0708 20:51:38.439932   57466 logs.go:276] 0 containers: []
	W0708 20:51:38.439941   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:38.439946   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:38.440008   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:38.474565   57466 cri.go:89] found id: ""
	I0708 20:51:38.474596   57466 logs.go:276] 0 containers: []
	W0708 20:51:38.474607   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:38.474616   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:38.474674   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:38.509837   57466 cri.go:89] found id: ""
	I0708 20:51:38.509873   57466 logs.go:276] 0 containers: []
	W0708 20:51:38.509885   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:38.509892   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:38.509950   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:38.544740   57466 cri.go:89] found id: ""
	I0708 20:51:38.544769   57466 logs.go:276] 0 containers: []
	W0708 20:51:38.544778   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:38.544785   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:38.544845   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:38.580553   57466 cri.go:89] found id: ""
	I0708 20:51:38.580578   57466 logs.go:276] 0 containers: []
	W0708 20:51:38.580586   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:38.580592   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:38.580649   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:38.613458   57466 cri.go:89] found id: ""
	I0708 20:51:38.613490   57466 logs.go:276] 0 containers: []
	W0708 20:51:38.613500   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:38.613511   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:38.613527   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:38.663335   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:38.663364   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:38.676942   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:38.676965   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:38.754183   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:38.754207   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:38.754218   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:38.833178   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:38.833211   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:41.373132   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:41.386492   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:41.386548   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:41.423554   57466 cri.go:89] found id: ""
	I0708 20:51:41.423580   57466 logs.go:276] 0 containers: []
	W0708 20:51:41.423588   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:41.423593   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:41.423651   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:41.456651   57466 cri.go:89] found id: ""
	I0708 20:51:41.456673   57466 logs.go:276] 0 containers: []
	W0708 20:51:41.456681   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:41.456685   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:41.456742   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:41.494798   57466 cri.go:89] found id: ""
	I0708 20:51:41.494830   57466 logs.go:276] 0 containers: []
	W0708 20:51:41.494842   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:41.494859   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:41.494918   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:41.531131   57466 cri.go:89] found id: ""
	I0708 20:51:41.531156   57466 logs.go:276] 0 containers: []
	W0708 20:51:41.531165   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:41.531171   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:41.531218   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:41.565547   57466 cri.go:89] found id: ""
	I0708 20:51:41.565570   57466 logs.go:276] 0 containers: []
	W0708 20:51:41.565580   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:41.565586   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:41.565649   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:41.599399   57466 cri.go:89] found id: ""
	I0708 20:51:41.599427   57466 logs.go:276] 0 containers: []
	W0708 20:51:41.599437   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:41.599444   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:41.599524   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:41.632500   57466 cri.go:89] found id: ""
	I0708 20:51:41.632524   57466 logs.go:276] 0 containers: []
	W0708 20:51:41.632533   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:41.632540   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:41.632598   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:41.665665   57466 cri.go:89] found id: ""
	I0708 20:51:41.665691   57466 logs.go:276] 0 containers: []
	W0708 20:51:41.665701   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:41.665712   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:41.665726   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:41.718171   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:41.718204   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:41.731492   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:41.731517   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:41.802943   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:41.802969   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:41.802981   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:41.875108   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:41.875140   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:44.414205   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:44.427399   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:44.427479   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:44.461746   57466 cri.go:89] found id: ""
	I0708 20:51:44.461771   57466 logs.go:276] 0 containers: []
	W0708 20:51:44.461780   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:44.461786   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:44.461843   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:44.499724   57466 cri.go:89] found id: ""
	I0708 20:51:44.499752   57466 logs.go:276] 0 containers: []
	W0708 20:51:44.499763   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:44.499771   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:44.499838   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:44.533559   57466 cri.go:89] found id: ""
	I0708 20:51:44.533581   57466 logs.go:276] 0 containers: []
	W0708 20:51:44.533588   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:44.533593   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:44.533653   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:44.572328   57466 cri.go:89] found id: ""
	I0708 20:51:44.572354   57466 logs.go:276] 0 containers: []
	W0708 20:51:44.572364   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:44.572371   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:44.572430   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:44.612217   57466 cri.go:89] found id: ""
	I0708 20:51:44.612243   57466 logs.go:276] 0 containers: []
	W0708 20:51:44.612251   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:44.612256   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:44.612317   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:44.647537   57466 cri.go:89] found id: ""
	I0708 20:51:44.647561   57466 logs.go:276] 0 containers: []
	W0708 20:51:44.647569   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:44.647574   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:44.647632   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:44.687087   57466 cri.go:89] found id: ""
	I0708 20:51:44.687113   57466 logs.go:276] 0 containers: []
	W0708 20:51:44.687122   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:44.687127   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:44.687181   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:44.724405   57466 cri.go:89] found id: ""
	I0708 20:51:44.724428   57466 logs.go:276] 0 containers: []
	W0708 20:51:44.724436   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:44.724445   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:44.724458   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:44.737468   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:44.737500   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:44.803606   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:44.803631   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:44.803643   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:44.890680   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:44.890716   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:44.928934   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:44.928968   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:47.480214   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:47.493785   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:47.493854   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:47.529934   57466 cri.go:89] found id: ""
	I0708 20:51:47.529965   57466 logs.go:276] 0 containers: []
	W0708 20:51:47.529976   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:47.529983   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:47.530043   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:47.565046   57466 cri.go:89] found id: ""
	I0708 20:51:47.565068   57466 logs.go:276] 0 containers: []
	W0708 20:51:47.565075   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:47.565081   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:47.565136   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:47.600733   57466 cri.go:89] found id: ""
	I0708 20:51:47.600757   57466 logs.go:276] 0 containers: []
	W0708 20:51:47.600765   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:47.600772   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:47.600831   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:47.635094   57466 cri.go:89] found id: ""
	I0708 20:51:47.635122   57466 logs.go:276] 0 containers: []
	W0708 20:51:47.635132   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:47.635139   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:47.635210   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:47.677332   57466 cri.go:89] found id: ""
	I0708 20:51:47.677359   57466 logs.go:276] 0 containers: []
	W0708 20:51:47.677370   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:47.677377   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:47.677439   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:47.712968   57466 cri.go:89] found id: ""
	I0708 20:51:47.712995   57466 logs.go:276] 0 containers: []
	W0708 20:51:47.713005   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:47.713011   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:47.713069   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:47.745657   57466 cri.go:89] found id: ""
	I0708 20:51:47.745681   57466 logs.go:276] 0 containers: []
	W0708 20:51:47.745689   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:47.745694   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:47.745754   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:47.780265   57466 cri.go:89] found id: ""
	I0708 20:51:47.780291   57466 logs.go:276] 0 containers: []
	W0708 20:51:47.780302   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:47.780313   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:47.780329   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:47.830667   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:47.830700   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:47.844145   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:47.844168   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:47.912403   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:47.912430   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:47.912447   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:47.986699   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:47.986736   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:50.528607   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:50.541541   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:50.541600   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:50.577191   57466 cri.go:89] found id: ""
	I0708 20:51:50.577228   57466 logs.go:276] 0 containers: []
	W0708 20:51:50.577236   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:50.577242   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:50.577295   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:50.620570   57466 cri.go:89] found id: ""
	I0708 20:51:50.620592   57466 logs.go:276] 0 containers: []
	W0708 20:51:50.620600   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:50.620605   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:50.620652   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:50.666847   57466 cri.go:89] found id: ""
	I0708 20:51:50.666874   57466 logs.go:276] 0 containers: []
	W0708 20:51:50.666885   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:50.666892   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:50.666939   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:50.704039   57466 cri.go:89] found id: ""
	I0708 20:51:50.704060   57466 logs.go:276] 0 containers: []
	W0708 20:51:50.704067   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:50.704072   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:50.704125   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:50.736580   57466 cri.go:89] found id: ""
	I0708 20:51:50.736607   57466 logs.go:276] 0 containers: []
	W0708 20:51:50.736617   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:50.736624   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:50.736685   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:50.772702   57466 cri.go:89] found id: ""
	I0708 20:51:50.772727   57466 logs.go:276] 0 containers: []
	W0708 20:51:50.772739   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:50.772745   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:50.772796   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:50.805565   57466 cri.go:89] found id: ""
	I0708 20:51:50.805589   57466 logs.go:276] 0 containers: []
	W0708 20:51:50.805597   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:50.805602   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:50.805664   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:50.839099   57466 cri.go:89] found id: ""
	I0708 20:51:50.839125   57466 logs.go:276] 0 containers: []
	W0708 20:51:50.839135   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:50.839145   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:50.839160   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:50.910350   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:50.910374   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:50.910389   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:50.988308   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:50.988342   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:51.024884   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:51.024911   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:51.071418   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:51.071458   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:53.584783   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:53.597732   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:53.597803   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:53.634824   57466 cri.go:89] found id: ""
	I0708 20:51:53.634848   57466 logs.go:276] 0 containers: []
	W0708 20:51:53.634856   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:53.634863   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:53.634919   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:53.670664   57466 cri.go:89] found id: ""
	I0708 20:51:53.670698   57466 logs.go:276] 0 containers: []
	W0708 20:51:53.670710   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:53.670718   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:53.670779   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:53.704505   57466 cri.go:89] found id: ""
	I0708 20:51:53.704531   57466 logs.go:276] 0 containers: []
	W0708 20:51:53.704538   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:53.704551   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:53.704609   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:53.738459   57466 cri.go:89] found id: ""
	I0708 20:51:53.738486   57466 logs.go:276] 0 containers: []
	W0708 20:51:53.738495   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:53.738500   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:53.738569   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:53.771069   57466 cri.go:89] found id: ""
	I0708 20:51:53.771101   57466 logs.go:276] 0 containers: []
	W0708 20:51:53.771112   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:53.771119   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:53.771173   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:53.808485   57466 cri.go:89] found id: ""
	I0708 20:51:53.808512   57466 logs.go:276] 0 containers: []
	W0708 20:51:53.808520   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:53.808526   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:53.808573   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:53.843742   57466 cri.go:89] found id: ""
	I0708 20:51:53.843776   57466 logs.go:276] 0 containers: []
	W0708 20:51:53.843786   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:53.843794   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:53.843856   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:53.878199   57466 cri.go:89] found id: ""
	I0708 20:51:53.878222   57466 logs.go:276] 0 containers: []
	W0708 20:51:53.878229   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:53.878241   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:53.878255   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:53.950266   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:53.950285   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:53.950297   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:54.031899   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:54.031943   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:54.070355   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:54.070386   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:54.117096   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:54.117125   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:56.631233   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:56.644533   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:56.644591   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:56.679071   57466 cri.go:89] found id: ""
	I0708 20:51:56.679100   57466 logs.go:276] 0 containers: []
	W0708 20:51:56.679110   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:56.679119   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:56.679189   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:56.713714   57466 cri.go:89] found id: ""
	I0708 20:51:56.713740   57466 logs.go:276] 0 containers: []
	W0708 20:51:56.713747   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:56.713757   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:56.713877   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:56.747349   57466 cri.go:89] found id: ""
	I0708 20:51:56.747375   57466 logs.go:276] 0 containers: []
	W0708 20:51:56.747383   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:56.747388   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:56.747440   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:56.782201   57466 cri.go:89] found id: ""
	I0708 20:51:56.782228   57466 logs.go:276] 0 containers: []
	W0708 20:51:56.782235   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:56.782240   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:56.782286   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:56.816869   57466 cri.go:89] found id: ""
	I0708 20:51:56.816901   57466 logs.go:276] 0 containers: []
	W0708 20:51:56.816911   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:56.816917   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:56.816968   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:56.852132   57466 cri.go:89] found id: ""
	I0708 20:51:56.852164   57466 logs.go:276] 0 containers: []
	W0708 20:51:56.852173   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:56.852179   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:56.852228   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:56.887053   57466 cri.go:89] found id: ""
	I0708 20:51:56.887084   57466 logs.go:276] 0 containers: []
	W0708 20:51:56.887093   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:56.887100   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:56.887153   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:56.921491   57466 cri.go:89] found id: ""
	I0708 20:51:56.921520   57466 logs.go:276] 0 containers: []
	W0708 20:51:56.921529   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:56.921539   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:51:56.921551   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:51:56.999598   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:51:56.999631   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:51:57.047262   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:57.047291   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:51:57.115267   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:51:57.115318   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:51:57.135213   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:51:57.135244   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:51:57.201218   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:51:59.701383   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:51:59.715228   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:51:59.715300   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:51:59.751108   57466 cri.go:89] found id: ""
	I0708 20:51:59.751135   57466 logs.go:276] 0 containers: []
	W0708 20:51:59.751142   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:51:59.751147   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:51:59.751203   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:51:59.789043   57466 cri.go:89] found id: ""
	I0708 20:51:59.789073   57466 logs.go:276] 0 containers: []
	W0708 20:51:59.789088   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:51:59.789095   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:51:59.789157   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:51:59.822989   57466 cri.go:89] found id: ""
	I0708 20:51:59.823018   57466 logs.go:276] 0 containers: []
	W0708 20:51:59.823028   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:51:59.823035   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:51:59.823095   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:51:59.858956   57466 cri.go:89] found id: ""
	I0708 20:51:59.858985   57466 logs.go:276] 0 containers: []
	W0708 20:51:59.858992   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:51:59.858997   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:51:59.859046   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:51:59.893938   57466 cri.go:89] found id: ""
	I0708 20:51:59.893969   57466 logs.go:276] 0 containers: []
	W0708 20:51:59.893977   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:51:59.893983   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:51:59.894040   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:51:59.929004   57466 cri.go:89] found id: ""
	I0708 20:51:59.929033   57466 logs.go:276] 0 containers: []
	W0708 20:51:59.929042   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:51:59.929048   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:51:59.929098   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:51:59.963597   57466 cri.go:89] found id: ""
	I0708 20:51:59.963625   57466 logs.go:276] 0 containers: []
	W0708 20:51:59.963633   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:51:59.963638   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:51:59.963698   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:51:59.999900   57466 cri.go:89] found id: ""
	I0708 20:51:59.999923   57466 logs.go:276] 0 containers: []
	W0708 20:51:59.999931   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:51:59.999940   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:51:59.999954   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:00.049203   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:00.049238   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:00.062832   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:00.062863   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:00.138620   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:00.138645   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:00.138660   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:00.211387   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:00.211419   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:02.750690   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:02.766346   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:02.766402   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:02.827409   57466 cri.go:89] found id: ""
	I0708 20:52:02.827443   57466 logs.go:276] 0 containers: []
	W0708 20:52:02.827464   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:02.827478   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:02.827534   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:02.884597   57466 cri.go:89] found id: ""
	I0708 20:52:02.884623   57466 logs.go:276] 0 containers: []
	W0708 20:52:02.884633   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:02.884640   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:02.884704   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:02.920877   57466 cri.go:89] found id: ""
	I0708 20:52:02.920902   57466 logs.go:276] 0 containers: []
	W0708 20:52:02.920911   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:02.920916   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:02.920968   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:02.956257   57466 cri.go:89] found id: ""
	I0708 20:52:02.956279   57466 logs.go:276] 0 containers: []
	W0708 20:52:02.956286   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:02.956291   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:02.956336   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:02.990345   57466 cri.go:89] found id: ""
	I0708 20:52:02.990378   57466 logs.go:276] 0 containers: []
	W0708 20:52:02.990388   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:02.990394   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:02.990450   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:03.023838   57466 cri.go:89] found id: ""
	I0708 20:52:03.023864   57466 logs.go:276] 0 containers: []
	W0708 20:52:03.023873   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:03.023879   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:03.023938   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:03.057587   57466 cri.go:89] found id: ""
	I0708 20:52:03.057616   57466 logs.go:276] 0 containers: []
	W0708 20:52:03.057626   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:03.057634   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:03.057713   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:03.090602   57466 cri.go:89] found id: ""
	I0708 20:52:03.090632   57466 logs.go:276] 0 containers: []
	W0708 20:52:03.090641   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:03.090651   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:03.090671   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:03.127931   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:03.127960   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:03.178820   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:03.178850   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:03.192277   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:03.192301   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:03.258913   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:03.258933   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:03.258948   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:05.833357   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:05.846584   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:05.846660   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:05.881882   57466 cri.go:89] found id: ""
	I0708 20:52:05.881912   57466 logs.go:276] 0 containers: []
	W0708 20:52:05.881922   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:05.881929   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:05.881985   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:05.915505   57466 cri.go:89] found id: ""
	I0708 20:52:05.915541   57466 logs.go:276] 0 containers: []
	W0708 20:52:05.915551   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:05.915558   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:05.915620   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:05.949843   57466 cri.go:89] found id: ""
	I0708 20:52:05.949869   57466 logs.go:276] 0 containers: []
	W0708 20:52:05.949879   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:05.949886   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:05.949944   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:05.985110   57466 cri.go:89] found id: ""
	I0708 20:52:05.985143   57466 logs.go:276] 0 containers: []
	W0708 20:52:05.985152   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:05.985159   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:05.985212   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:06.018558   57466 cri.go:89] found id: ""
	I0708 20:52:06.018586   57466 logs.go:276] 0 containers: []
	W0708 20:52:06.018594   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:06.018600   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:06.018651   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:06.054186   57466 cri.go:89] found id: ""
	I0708 20:52:06.054219   57466 logs.go:276] 0 containers: []
	W0708 20:52:06.054230   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:06.054238   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:06.054298   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:06.090863   57466 cri.go:89] found id: ""
	I0708 20:52:06.090886   57466 logs.go:276] 0 containers: []
	W0708 20:52:06.090895   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:06.090901   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:06.090959   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:06.124860   57466 cri.go:89] found id: ""
	I0708 20:52:06.124888   57466 logs.go:276] 0 containers: []
	W0708 20:52:06.124898   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:06.124913   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:06.124927   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:06.173624   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:06.173662   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:06.187811   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:06.187840   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:06.255831   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:06.255855   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:06.255872   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:06.334355   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:06.334389   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:08.873730   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:08.886977   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:08.887050   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:08.921828   57466 cri.go:89] found id: ""
	I0708 20:52:08.921872   57466 logs.go:276] 0 containers: []
	W0708 20:52:08.921884   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:08.921891   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:08.921955   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:08.956276   57466 cri.go:89] found id: ""
	I0708 20:52:08.956305   57466 logs.go:276] 0 containers: []
	W0708 20:52:08.956316   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:08.956323   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:08.956372   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:08.991826   57466 cri.go:89] found id: ""
	I0708 20:52:08.991853   57466 logs.go:276] 0 containers: []
	W0708 20:52:08.991863   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:08.991870   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:08.991929   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:09.026776   57466 cri.go:89] found id: ""
	I0708 20:52:09.026806   57466 logs.go:276] 0 containers: []
	W0708 20:52:09.026817   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:09.026825   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:09.026880   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:09.060104   57466 cri.go:89] found id: ""
	I0708 20:52:09.060130   57466 logs.go:276] 0 containers: []
	W0708 20:52:09.060139   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:09.060145   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:09.060205   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:09.095132   57466 cri.go:89] found id: ""
	I0708 20:52:09.095158   57466 logs.go:276] 0 containers: []
	W0708 20:52:09.095166   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:09.095172   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:09.095225   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:09.130125   57466 cri.go:89] found id: ""
	I0708 20:52:09.130155   57466 logs.go:276] 0 containers: []
	W0708 20:52:09.130164   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:09.130171   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:09.130234   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:09.163171   57466 cri.go:89] found id: ""
	I0708 20:52:09.163202   57466 logs.go:276] 0 containers: []
	W0708 20:52:09.163214   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:09.163226   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:09.163242   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:09.201161   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:09.201188   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:09.252214   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:09.252241   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:09.266223   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:09.266251   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:09.334035   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:09.334060   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:09.334075   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:11.912578   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:11.925914   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:11.925979   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:11.961959   57466 cri.go:89] found id: ""
	I0708 20:52:11.961986   57466 logs.go:276] 0 containers: []
	W0708 20:52:11.961994   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:11.962000   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:11.962051   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:11.996039   57466 cri.go:89] found id: ""
	I0708 20:52:11.996065   57466 logs.go:276] 0 containers: []
	W0708 20:52:11.996072   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:11.996078   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:11.996122   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:12.035472   57466 cri.go:89] found id: ""
	I0708 20:52:12.035497   57466 logs.go:276] 0 containers: []
	W0708 20:52:12.035507   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:12.035514   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:12.035571   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:12.070153   57466 cri.go:89] found id: ""
	I0708 20:52:12.070181   57466 logs.go:276] 0 containers: []
	W0708 20:52:12.070191   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:12.070199   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:12.070257   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:12.104521   57466 cri.go:89] found id: ""
	I0708 20:52:12.104547   57466 logs.go:276] 0 containers: []
	W0708 20:52:12.104558   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:12.104565   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:12.104617   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:12.141365   57466 cri.go:89] found id: ""
	I0708 20:52:12.141389   57466 logs.go:276] 0 containers: []
	W0708 20:52:12.141395   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:12.141402   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:12.141450   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:12.175256   57466 cri.go:89] found id: ""
	I0708 20:52:12.175280   57466 logs.go:276] 0 containers: []
	W0708 20:52:12.175288   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:12.175294   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:12.175337   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:12.209347   57466 cri.go:89] found id: ""
	I0708 20:52:12.209375   57466 logs.go:276] 0 containers: []
	W0708 20:52:12.209384   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:12.209395   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:12.209416   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:12.222842   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:12.222868   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:12.290527   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:12.290553   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:12.290569   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:12.364716   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:12.364755   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:12.403161   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:12.403217   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:14.954950   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:14.968137   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:14.968215   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:15.002244   57466 cri.go:89] found id: ""
	I0708 20:52:15.002269   57466 logs.go:276] 0 containers: []
	W0708 20:52:15.002277   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:15.002282   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:15.002333   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:15.036486   57466 cri.go:89] found id: ""
	I0708 20:52:15.036509   57466 logs.go:276] 0 containers: []
	W0708 20:52:15.036516   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:15.036521   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:15.036568   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:15.072628   57466 cri.go:89] found id: ""
	I0708 20:52:15.072661   57466 logs.go:276] 0 containers: []
	W0708 20:52:15.072672   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:15.072683   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:15.072732   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:15.110975   57466 cri.go:89] found id: ""
	I0708 20:52:15.111005   57466 logs.go:276] 0 containers: []
	W0708 20:52:15.111012   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:15.111018   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:15.111075   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:15.147412   57466 cri.go:89] found id: ""
	I0708 20:52:15.147441   57466 logs.go:276] 0 containers: []
	W0708 20:52:15.147469   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:15.147477   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:15.147539   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:15.187302   57466 cri.go:89] found id: ""
	I0708 20:52:15.187335   57466 logs.go:276] 0 containers: []
	W0708 20:52:15.187345   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:15.187350   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:15.187407   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:15.222296   57466 cri.go:89] found id: ""
	I0708 20:52:15.222327   57466 logs.go:276] 0 containers: []
	W0708 20:52:15.222339   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:15.222345   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:15.222426   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:15.257368   57466 cri.go:89] found id: ""
	I0708 20:52:15.257395   57466 logs.go:276] 0 containers: []
	W0708 20:52:15.257406   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:15.257416   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:15.257432   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:15.326061   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:15.326084   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:15.326098   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:15.410500   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:15.410533   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:15.455880   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:15.455908   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:15.507487   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:15.507526   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:18.022948   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:18.036584   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:18.036646   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:18.076446   57466 cri.go:89] found id: ""
	I0708 20:52:18.076471   57466 logs.go:276] 0 containers: []
	W0708 20:52:18.076479   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:18.076485   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:18.076535   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:18.111159   57466 cri.go:89] found id: ""
	I0708 20:52:18.111180   57466 logs.go:276] 0 containers: []
	W0708 20:52:18.111188   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:18.111193   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:18.111255   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:18.145465   57466 cri.go:89] found id: ""
	I0708 20:52:18.145499   57466 logs.go:276] 0 containers: []
	W0708 20:52:18.145508   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:18.145515   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:18.145569   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:18.180961   57466 cri.go:89] found id: ""
	I0708 20:52:18.180986   57466 logs.go:276] 0 containers: []
	W0708 20:52:18.180994   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:18.181000   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:18.181048   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:18.215203   57466 cri.go:89] found id: ""
	I0708 20:52:18.215232   57466 logs.go:276] 0 containers: []
	W0708 20:52:18.215239   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:18.215246   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:18.215294   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:18.249251   57466 cri.go:89] found id: ""
	I0708 20:52:18.249274   57466 logs.go:276] 0 containers: []
	W0708 20:52:18.249281   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:18.249287   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:18.249344   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:18.284027   57466 cri.go:89] found id: ""
	I0708 20:52:18.284050   57466 logs.go:276] 0 containers: []
	W0708 20:52:18.284058   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:18.284063   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:18.284111   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:18.321272   57466 cri.go:89] found id: ""
	I0708 20:52:18.321297   57466 logs.go:276] 0 containers: []
	W0708 20:52:18.321305   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:18.321318   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:18.321329   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:18.373932   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:18.373963   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:18.388741   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:18.388767   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:18.456051   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:18.456069   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:18.456083   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:18.529015   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:18.529058   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:21.069018   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:21.082213   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:21.082284   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:21.117850   57466 cri.go:89] found id: ""
	I0708 20:52:21.117875   57466 logs.go:276] 0 containers: []
	W0708 20:52:21.117886   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:21.117892   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:21.117951   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:21.150922   57466 cri.go:89] found id: ""
	I0708 20:52:21.150955   57466 logs.go:276] 0 containers: []
	W0708 20:52:21.150967   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:21.150975   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:21.151037   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:21.183826   57466 cri.go:89] found id: ""
	I0708 20:52:21.183856   57466 logs.go:276] 0 containers: []
	W0708 20:52:21.183867   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:21.183874   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:21.183941   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:21.218715   57466 cri.go:89] found id: ""
	I0708 20:52:21.218741   57466 logs.go:276] 0 containers: []
	W0708 20:52:21.218750   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:21.218755   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:21.218812   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:21.255796   57466 cri.go:89] found id: ""
	I0708 20:52:21.255821   57466 logs.go:276] 0 containers: []
	W0708 20:52:21.255828   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:21.255833   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:21.255892   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:21.288850   57466 cri.go:89] found id: ""
	I0708 20:52:21.288881   57466 logs.go:276] 0 containers: []
	W0708 20:52:21.288891   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:21.288898   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:21.288957   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:21.325880   57466 cri.go:89] found id: ""
	I0708 20:52:21.325908   57466 logs.go:276] 0 containers: []
	W0708 20:52:21.325918   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:21.325926   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:21.325985   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:21.360770   57466 cri.go:89] found id: ""
	I0708 20:52:21.360794   57466 logs.go:276] 0 containers: []
	W0708 20:52:21.360802   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:21.360810   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:21.360822   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:21.428304   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:21.428336   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:21.428350   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:21.501396   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:21.501436   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:21.540919   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:21.540954   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:21.589486   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:21.589519   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:24.103771   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:24.116584   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:24.116649   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:24.150376   57466 cri.go:89] found id: ""
	I0708 20:52:24.150409   57466 logs.go:276] 0 containers: []
	W0708 20:52:24.150420   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:24.150427   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:24.150481   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:24.184932   57466 cri.go:89] found id: ""
	I0708 20:52:24.184959   57466 logs.go:276] 0 containers: []
	W0708 20:52:24.184970   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:24.184977   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:24.185035   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:24.220704   57466 cri.go:89] found id: ""
	I0708 20:52:24.220733   57466 logs.go:276] 0 containers: []
	W0708 20:52:24.220741   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:24.220747   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:24.220800   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:24.258468   57466 cri.go:89] found id: ""
	I0708 20:52:24.258493   57466 logs.go:276] 0 containers: []
	W0708 20:52:24.258500   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:24.258505   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:24.258561   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:24.291602   57466 cri.go:89] found id: ""
	I0708 20:52:24.291625   57466 logs.go:276] 0 containers: []
	W0708 20:52:24.291633   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:24.291638   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:24.291684   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:24.324826   57466 cri.go:89] found id: ""
	I0708 20:52:24.324859   57466 logs.go:276] 0 containers: []
	W0708 20:52:24.324870   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:24.324881   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:24.324941   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:24.358691   57466 cri.go:89] found id: ""
	I0708 20:52:24.358722   57466 logs.go:276] 0 containers: []
	W0708 20:52:24.358733   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:24.358740   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:24.358793   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:24.392785   57466 cri.go:89] found id: ""
	I0708 20:52:24.392812   57466 logs.go:276] 0 containers: []
	W0708 20:52:24.392822   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:24.392832   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:24.392846   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:24.441519   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:24.441552   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:24.455507   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:24.455533   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:24.525547   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:24.525572   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:24.525584   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:24.597613   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:24.597646   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:27.136428   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:27.150215   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:27.150277   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:27.188040   57466 cri.go:89] found id: ""
	I0708 20:52:27.188066   57466 logs.go:276] 0 containers: []
	W0708 20:52:27.188077   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:27.188084   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:27.188138   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:27.222553   57466 cri.go:89] found id: ""
	I0708 20:52:27.222585   57466 logs.go:276] 0 containers: []
	W0708 20:52:27.222595   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:27.222603   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:27.222667   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:27.257263   57466 cri.go:89] found id: ""
	I0708 20:52:27.257292   57466 logs.go:276] 0 containers: []
	W0708 20:52:27.257301   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:27.257306   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:27.257359   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:27.293945   57466 cri.go:89] found id: ""
	I0708 20:52:27.293972   57466 logs.go:276] 0 containers: []
	W0708 20:52:27.293981   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:27.293986   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:27.294036   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:27.331260   57466 cri.go:89] found id: ""
	I0708 20:52:27.331288   57466 logs.go:276] 0 containers: []
	W0708 20:52:27.331301   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:27.331308   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:27.331366   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:27.365250   57466 cri.go:89] found id: ""
	I0708 20:52:27.365284   57466 logs.go:276] 0 containers: []
	W0708 20:52:27.365292   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:27.365298   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:27.365350   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:27.399259   57466 cri.go:89] found id: ""
	I0708 20:52:27.399288   57466 logs.go:276] 0 containers: []
	W0708 20:52:27.399295   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:27.399301   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:27.399354   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:27.432426   57466 cri.go:89] found id: ""
	I0708 20:52:27.432457   57466 logs.go:276] 0 containers: []
	W0708 20:52:27.432465   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:27.432473   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:27.432484   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:27.445185   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:27.445212   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:27.516780   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:27.516809   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:27.516824   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:27.589566   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:27.589597   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:27.628623   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:27.628654   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:30.181005   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:30.194781   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:30.194855   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:30.232180   57466 cri.go:89] found id: ""
	I0708 20:52:30.232204   57466 logs.go:276] 0 containers: []
	W0708 20:52:30.232212   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:30.232218   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:30.232267   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:30.268142   57466 cri.go:89] found id: ""
	I0708 20:52:30.268169   57466 logs.go:276] 0 containers: []
	W0708 20:52:30.268181   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:30.268188   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:30.268244   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:30.305835   57466 cri.go:89] found id: ""
	I0708 20:52:30.305863   57466 logs.go:276] 0 containers: []
	W0708 20:52:30.305874   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:30.305881   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:30.305954   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:30.340329   57466 cri.go:89] found id: ""
	I0708 20:52:30.340359   57466 logs.go:276] 0 containers: []
	W0708 20:52:30.340367   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:30.340372   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:30.340431   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:30.375162   57466 cri.go:89] found id: ""
	I0708 20:52:30.375193   57466 logs.go:276] 0 containers: []
	W0708 20:52:30.375205   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:30.375212   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:30.375272   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:30.409108   57466 cri.go:89] found id: ""
	I0708 20:52:30.409141   57466 logs.go:276] 0 containers: []
	W0708 20:52:30.409153   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:30.409160   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:30.409220   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:30.443886   57466 cri.go:89] found id: ""
	I0708 20:52:30.443911   57466 logs.go:276] 0 containers: []
	W0708 20:52:30.443920   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:30.443940   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:30.443999   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:30.482636   57466 cri.go:89] found id: ""
	I0708 20:52:30.482658   57466 logs.go:276] 0 containers: []
	W0708 20:52:30.482666   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:30.482674   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:30.482685   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:30.557487   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:30.557525   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:30.596535   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:30.596567   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:30.648070   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:30.648106   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:30.661601   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:30.661629   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:30.730096   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:33.230779   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:33.244247   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:33.244307   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:33.281928   57466 cri.go:89] found id: ""
	I0708 20:52:33.281957   57466 logs.go:276] 0 containers: []
	W0708 20:52:33.281967   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:33.281974   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:33.282034   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:33.315786   57466 cri.go:89] found id: ""
	I0708 20:52:33.315938   57466 logs.go:276] 0 containers: []
	W0708 20:52:33.315953   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:33.315961   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:33.316028   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:33.350752   57466 cri.go:89] found id: ""
	I0708 20:52:33.350785   57466 logs.go:276] 0 containers: []
	W0708 20:52:33.350793   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:33.350799   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:33.350848   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:33.385399   57466 cri.go:89] found id: ""
	I0708 20:52:33.385429   57466 logs.go:276] 0 containers: []
	W0708 20:52:33.385439   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:33.385446   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:33.385503   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:33.418686   57466 cri.go:89] found id: ""
	I0708 20:52:33.418713   57466 logs.go:276] 0 containers: []
	W0708 20:52:33.418720   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:33.418725   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:33.418773   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:33.453717   57466 cri.go:89] found id: ""
	I0708 20:52:33.453745   57466 logs.go:276] 0 containers: []
	W0708 20:52:33.453754   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:33.453759   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:33.453810   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:33.486160   57466 cri.go:89] found id: ""
	I0708 20:52:33.486189   57466 logs.go:276] 0 containers: []
	W0708 20:52:33.486197   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:33.486203   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:33.486278   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:33.528764   57466 cri.go:89] found id: ""
	I0708 20:52:33.528799   57466 logs.go:276] 0 containers: []
	W0708 20:52:33.528810   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:33.528822   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:33.528836   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:33.582119   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:33.582160   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:33.596214   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:33.596242   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:33.664925   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:33.664948   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:33.664962   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:33.743690   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:33.743728   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:36.291702   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:36.305056   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:36.305123   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:36.340357   57466 cri.go:89] found id: ""
	I0708 20:52:36.340380   57466 logs.go:276] 0 containers: []
	W0708 20:52:36.340389   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:36.340402   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:36.340459   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:36.374425   57466 cri.go:89] found id: ""
	I0708 20:52:36.374451   57466 logs.go:276] 0 containers: []
	W0708 20:52:36.374462   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:36.374470   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:36.374525   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:36.406715   57466 cri.go:89] found id: ""
	I0708 20:52:36.406746   57466 logs.go:276] 0 containers: []
	W0708 20:52:36.406758   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:36.406764   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:36.406822   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:36.441618   57466 cri.go:89] found id: ""
	I0708 20:52:36.441645   57466 logs.go:276] 0 containers: []
	W0708 20:52:36.441654   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:36.441661   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:36.441722   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:36.482741   57466 cri.go:89] found id: ""
	I0708 20:52:36.482770   57466 logs.go:276] 0 containers: []
	W0708 20:52:36.482780   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:36.482788   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:36.482854   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:36.522695   57466 cri.go:89] found id: ""
	I0708 20:52:36.522725   57466 logs.go:276] 0 containers: []
	W0708 20:52:36.522736   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:36.522744   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:36.522808   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:36.557344   57466 cri.go:89] found id: ""
	I0708 20:52:36.557372   57466 logs.go:276] 0 containers: []
	W0708 20:52:36.557381   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:36.557388   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:36.557457   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:36.596393   57466 cri.go:89] found id: ""
	I0708 20:52:36.596422   57466 logs.go:276] 0 containers: []
	W0708 20:52:36.596434   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:36.596444   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:36.596464   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:36.683875   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:36.683898   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:36.683914   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:36.760234   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:36.760267   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:36.811930   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:36.811965   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:36.861620   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:36.861652   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:39.379725   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:39.392522   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:39.392588   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:39.425462   57466 cri.go:89] found id: ""
	I0708 20:52:39.425490   57466 logs.go:276] 0 containers: []
	W0708 20:52:39.425500   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:39.425508   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:39.425569   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:39.458393   57466 cri.go:89] found id: ""
	I0708 20:52:39.458426   57466 logs.go:276] 0 containers: []
	W0708 20:52:39.458438   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:39.458444   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:39.458498   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:39.490634   57466 cri.go:89] found id: ""
	I0708 20:52:39.490658   57466 logs.go:276] 0 containers: []
	W0708 20:52:39.490666   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:39.490671   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:39.490724   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:39.522910   57466 cri.go:89] found id: ""
	I0708 20:52:39.522941   57466 logs.go:276] 0 containers: []
	W0708 20:52:39.522949   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:39.522955   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:39.523001   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:39.556659   57466 cri.go:89] found id: ""
	I0708 20:52:39.556682   57466 logs.go:276] 0 containers: []
	W0708 20:52:39.556689   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:39.556694   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:39.556755   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:39.592341   57466 cri.go:89] found id: ""
	I0708 20:52:39.592368   57466 logs.go:276] 0 containers: []
	W0708 20:52:39.592378   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:39.592386   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:39.592448   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:39.635093   57466 cri.go:89] found id: ""
	I0708 20:52:39.635122   57466 logs.go:276] 0 containers: []
	W0708 20:52:39.635131   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:39.635136   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:39.635192   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:39.669494   57466 cri.go:89] found id: ""
	I0708 20:52:39.669522   57466 logs.go:276] 0 containers: []
	W0708 20:52:39.669530   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:39.669538   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:39.669550   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:39.717843   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:39.717880   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:39.731883   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:39.731905   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:39.799920   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:39.799946   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:39.799961   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:39.881561   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:39.881608   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:42.423505   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:42.436970   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:42.437036   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:42.474094   57466 cri.go:89] found id: ""
	I0708 20:52:42.474125   57466 logs.go:276] 0 containers: []
	W0708 20:52:42.474135   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:42.474143   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:42.474204   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:42.509031   57466 cri.go:89] found id: ""
	I0708 20:52:42.509058   57466 logs.go:276] 0 containers: []
	W0708 20:52:42.509067   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:42.509074   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:42.509144   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:42.543293   57466 cri.go:89] found id: ""
	I0708 20:52:42.543318   57466 logs.go:276] 0 containers: []
	W0708 20:52:42.543329   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:42.543335   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:42.543397   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:42.576409   57466 cri.go:89] found id: ""
	I0708 20:52:42.576437   57466 logs.go:276] 0 containers: []
	W0708 20:52:42.576447   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:42.576455   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:42.576515   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:42.610236   57466 cri.go:89] found id: ""
	I0708 20:52:42.610265   57466 logs.go:276] 0 containers: []
	W0708 20:52:42.610278   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:42.610285   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:42.610350   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:42.644878   57466 cri.go:89] found id: ""
	I0708 20:52:42.644904   57466 logs.go:276] 0 containers: []
	W0708 20:52:42.644914   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:42.644922   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:42.644993   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:42.678177   57466 cri.go:89] found id: ""
	I0708 20:52:42.678203   57466 logs.go:276] 0 containers: []
	W0708 20:52:42.678213   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:42.678219   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:42.678278   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:42.710755   57466 cri.go:89] found id: ""
	I0708 20:52:42.710776   57466 logs.go:276] 0 containers: []
	W0708 20:52:42.710784   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:42.710792   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:42.710805   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:42.723774   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:42.723797   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:42.791356   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:42.791377   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:42.791389   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:42.864566   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:42.864600   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:42.905049   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:42.905074   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:45.456514   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:45.470789   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:45.470869   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:45.507723   57466 cri.go:89] found id: ""
	I0708 20:52:45.507747   57466 logs.go:276] 0 containers: []
	W0708 20:52:45.507755   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:45.507760   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:45.507807   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:45.541847   57466 cri.go:89] found id: ""
	I0708 20:52:45.541870   57466 logs.go:276] 0 containers: []
	W0708 20:52:45.541877   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:45.541882   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:45.541930   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:45.585867   57466 cri.go:89] found id: ""
	I0708 20:52:45.585889   57466 logs.go:276] 0 containers: []
	W0708 20:52:45.585896   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:45.585902   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:45.585947   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:45.621057   57466 cri.go:89] found id: ""
	I0708 20:52:45.621088   57466 logs.go:276] 0 containers: []
	W0708 20:52:45.621098   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:45.621106   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:45.621180   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:45.655142   57466 cri.go:89] found id: ""
	I0708 20:52:45.655167   57466 logs.go:276] 0 containers: []
	W0708 20:52:45.655175   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:45.655180   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:45.655230   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:45.688145   57466 cri.go:89] found id: ""
	I0708 20:52:45.688172   57466 logs.go:276] 0 containers: []
	W0708 20:52:45.688179   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:45.688184   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:45.688231   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:45.720283   57466 cri.go:89] found id: ""
	I0708 20:52:45.720307   57466 logs.go:276] 0 containers: []
	W0708 20:52:45.720314   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:45.720320   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:45.720366   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:45.754217   57466 cri.go:89] found id: ""
	I0708 20:52:45.754250   57466 logs.go:276] 0 containers: []
	W0708 20:52:45.754261   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:45.754271   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:45.754285   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:45.804135   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:45.804170   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:45.817918   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:45.817944   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:45.885492   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:45.885518   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:45.885532   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:45.964009   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:45.964048   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:48.506833   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:48.521125   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:48.521201   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:48.559984   57466 cri.go:89] found id: ""
	I0708 20:52:48.560006   57466 logs.go:276] 0 containers: []
	W0708 20:52:48.560013   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:48.560018   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:48.560067   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:48.593115   57466 cri.go:89] found id: ""
	I0708 20:52:48.593143   57466 logs.go:276] 0 containers: []
	W0708 20:52:48.593154   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:48.593161   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:48.593223   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:48.625987   57466 cri.go:89] found id: ""
	I0708 20:52:48.626010   57466 logs.go:276] 0 containers: []
	W0708 20:52:48.626018   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:48.626024   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:48.626070   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:48.665295   57466 cri.go:89] found id: ""
	I0708 20:52:48.665322   57466 logs.go:276] 0 containers: []
	W0708 20:52:48.665331   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:48.665336   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:48.665390   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:48.703265   57466 cri.go:89] found id: ""
	I0708 20:52:48.703286   57466 logs.go:276] 0 containers: []
	W0708 20:52:48.703294   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:48.703300   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:48.703346   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:48.739052   57466 cri.go:89] found id: ""
	I0708 20:52:48.739080   57466 logs.go:276] 0 containers: []
	W0708 20:52:48.739091   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:48.739098   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:48.739158   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:48.782692   57466 cri.go:89] found id: ""
	I0708 20:52:48.782724   57466 logs.go:276] 0 containers: []
	W0708 20:52:48.782736   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:48.782744   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:48.782804   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:48.830272   57466 cri.go:89] found id: ""
	I0708 20:52:48.830304   57466 logs.go:276] 0 containers: []
	W0708 20:52:48.830315   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:48.830326   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:48.830339   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:48.847612   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:48.847637   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:48.929114   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:48.929155   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:48.929174   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:49.007340   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:49.007380   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:49.044283   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:49.044309   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:51.594352   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:51.607562   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:51.607631   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:51.642368   57466 cri.go:89] found id: ""
	I0708 20:52:51.642395   57466 logs.go:276] 0 containers: []
	W0708 20:52:51.642403   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:51.642409   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:51.642461   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:51.674808   57466 cri.go:89] found id: ""
	I0708 20:52:51.674842   57466 logs.go:276] 0 containers: []
	W0708 20:52:51.674851   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:51.674858   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:51.674917   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:51.714867   57466 cri.go:89] found id: ""
	I0708 20:52:51.714892   57466 logs.go:276] 0 containers: []
	W0708 20:52:51.714899   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:51.714904   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:51.714965   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:51.753605   57466 cri.go:89] found id: ""
	I0708 20:52:51.753635   57466 logs.go:276] 0 containers: []
	W0708 20:52:51.753645   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:51.753652   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:51.753710   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:51.788752   57466 cri.go:89] found id: ""
	I0708 20:52:51.788781   57466 logs.go:276] 0 containers: []
	W0708 20:52:51.788789   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:51.788794   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:51.788847   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:51.832958   57466 cri.go:89] found id: ""
	I0708 20:52:51.832992   57466 logs.go:276] 0 containers: []
	W0708 20:52:51.833006   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:51.833018   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:51.833085   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:51.867109   57466 cri.go:89] found id: ""
	I0708 20:52:51.867133   57466 logs.go:276] 0 containers: []
	W0708 20:52:51.867140   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:51.867146   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:51.867193   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:51.898951   57466 cri.go:89] found id: ""
	I0708 20:52:51.898980   57466 logs.go:276] 0 containers: []
	W0708 20:52:51.898990   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:51.899000   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:51.899014   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:51.949335   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:51.949366   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:51.962727   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:51.962752   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:52.034942   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:52.034961   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:52.034973   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:52.111478   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:52.111512   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:54.647623   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:54.662889   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:54.662967   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:54.716176   57466 cri.go:89] found id: ""
	I0708 20:52:54.716198   57466 logs.go:276] 0 containers: []
	W0708 20:52:54.716207   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:54.716214   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:54.716272   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:54.781085   57466 cri.go:89] found id: ""
	I0708 20:52:54.781118   57466 logs.go:276] 0 containers: []
	W0708 20:52:54.781125   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:54.781132   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:54.781198   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:54.815634   57466 cri.go:89] found id: ""
	I0708 20:52:54.815664   57466 logs.go:276] 0 containers: []
	W0708 20:52:54.815672   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:54.815678   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:54.815734   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:54.850467   57466 cri.go:89] found id: ""
	I0708 20:52:54.850498   57466 logs.go:276] 0 containers: []
	W0708 20:52:54.850506   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:54.850511   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:54.850559   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:54.884150   57466 cri.go:89] found id: ""
	I0708 20:52:54.884188   57466 logs.go:276] 0 containers: []
	W0708 20:52:54.884199   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:54.884206   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:54.884268   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:54.918570   57466 cri.go:89] found id: ""
	I0708 20:52:54.918597   57466 logs.go:276] 0 containers: []
	W0708 20:52:54.918605   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:54.918613   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:54.918663   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:54.951608   57466 cri.go:89] found id: ""
	I0708 20:52:54.951629   57466 logs.go:276] 0 containers: []
	W0708 20:52:54.951637   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:54.951642   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:54.951688   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:54.989675   57466 cri.go:89] found id: ""
	I0708 20:52:54.989701   57466 logs.go:276] 0 containers: []
	W0708 20:52:54.989708   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:54.989717   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:54.989728   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:55.002482   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:55.002504   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:55.072873   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:55.072892   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:55.072905   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:55.151018   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:55.151051   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:55.190608   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:55.190643   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:52:57.742450   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:52:57.755783   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:52:57.755851   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:52:57.789202   57466 cri.go:89] found id: ""
	I0708 20:52:57.789234   57466 logs.go:276] 0 containers: []
	W0708 20:52:57.789244   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:52:57.789250   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:52:57.789314   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:52:57.822608   57466 cri.go:89] found id: ""
	I0708 20:52:57.822634   57466 logs.go:276] 0 containers: []
	W0708 20:52:57.822642   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:52:57.822647   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:52:57.822708   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:52:57.856591   57466 cri.go:89] found id: ""
	I0708 20:52:57.856629   57466 logs.go:276] 0 containers: []
	W0708 20:52:57.856640   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:52:57.856650   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:52:57.856712   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:52:57.888931   57466 cri.go:89] found id: ""
	I0708 20:52:57.888956   57466 logs.go:276] 0 containers: []
	W0708 20:52:57.888964   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:52:57.888969   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:52:57.889025   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:52:57.922216   57466 cri.go:89] found id: ""
	I0708 20:52:57.922246   57466 logs.go:276] 0 containers: []
	W0708 20:52:57.922257   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:52:57.922264   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:52:57.922328   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:52:57.956726   57466 cri.go:89] found id: ""
	I0708 20:52:57.956749   57466 logs.go:276] 0 containers: []
	W0708 20:52:57.956756   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:52:57.956762   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:52:57.956815   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:52:57.990662   57466 cri.go:89] found id: ""
	I0708 20:52:57.990693   57466 logs.go:276] 0 containers: []
	W0708 20:52:57.990703   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:52:57.990710   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:52:57.990771   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:52:58.024835   57466 cri.go:89] found id: ""
	I0708 20:52:58.024861   57466 logs.go:276] 0 containers: []
	W0708 20:52:58.024874   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:52:58.024883   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:52:58.024896   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:52:58.039068   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:52:58.039096   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:52:58.118188   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:52:58.118209   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:52:58.118220   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:52:58.189024   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:52:58.189055   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:52:58.228818   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:52:58.228843   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:53:00.779094   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:53:00.793048   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:53:00.793108   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:53:00.827601   57466 cri.go:89] found id: ""
	I0708 20:53:00.827631   57466 logs.go:276] 0 containers: []
	W0708 20:53:00.827639   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:53:00.827644   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:53:00.827700   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:53:00.861285   57466 cri.go:89] found id: ""
	I0708 20:53:00.861311   57466 logs.go:276] 0 containers: []
	W0708 20:53:00.861318   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:53:00.861324   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:53:00.861408   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:53:00.893992   57466 cri.go:89] found id: ""
	I0708 20:53:00.894022   57466 logs.go:276] 0 containers: []
	W0708 20:53:00.894032   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:53:00.894039   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:53:00.894097   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:53:00.929847   57466 cri.go:89] found id: ""
	I0708 20:53:00.929874   57466 logs.go:276] 0 containers: []
	W0708 20:53:00.929884   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:53:00.929890   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:53:00.929947   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:53:00.974405   57466 cri.go:89] found id: ""
	I0708 20:53:00.974434   57466 logs.go:276] 0 containers: []
	W0708 20:53:00.974442   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:53:00.974448   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:53:00.974508   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:53:01.012474   57466 cri.go:89] found id: ""
	I0708 20:53:01.012500   57466 logs.go:276] 0 containers: []
	W0708 20:53:01.012510   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:53:01.012516   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:53:01.012578   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:53:01.046042   57466 cri.go:89] found id: ""
	I0708 20:53:01.046069   57466 logs.go:276] 0 containers: []
	W0708 20:53:01.046079   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:53:01.046085   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:53:01.046148   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:53:01.082764   57466 cri.go:89] found id: ""
	I0708 20:53:01.082795   57466 logs.go:276] 0 containers: []
	W0708 20:53:01.082805   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:53:01.082817   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:53:01.082832   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:53:01.162310   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:53:01.162342   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:53:01.201164   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:53:01.201192   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:53:01.255129   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:53:01.255167   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:53:01.268772   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:53:01.268798   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:53:01.339380   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:53:03.840551   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:53:03.854313   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:53:03.854392   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:53:03.887127   57466 cri.go:89] found id: ""
	I0708 20:53:03.887154   57466 logs.go:276] 0 containers: []
	W0708 20:53:03.887161   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:53:03.887167   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:53:03.887225   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:53:03.920876   57466 cri.go:89] found id: ""
	I0708 20:53:03.920902   57466 logs.go:276] 0 containers: []
	W0708 20:53:03.920908   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:53:03.920913   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:53:03.920960   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:53:03.952696   57466 cri.go:89] found id: ""
	I0708 20:53:03.952732   57466 logs.go:276] 0 containers: []
	W0708 20:53:03.952742   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:53:03.952750   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:53:03.952815   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:53:03.987435   57466 cri.go:89] found id: ""
	I0708 20:53:03.987473   57466 logs.go:276] 0 containers: []
	W0708 20:53:03.987488   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:53:03.987498   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:53:03.987557   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:53:04.021381   57466 cri.go:89] found id: ""
	I0708 20:53:04.021403   57466 logs.go:276] 0 containers: []
	W0708 20:53:04.021411   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:53:04.021416   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:53:04.021471   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:53:04.055786   57466 cri.go:89] found id: ""
	I0708 20:53:04.055818   57466 logs.go:276] 0 containers: []
	W0708 20:53:04.055827   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:53:04.055833   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:53:04.055905   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:53:04.088351   57466 cri.go:89] found id: ""
	I0708 20:53:04.088374   57466 logs.go:276] 0 containers: []
	W0708 20:53:04.088381   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:53:04.088387   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:53:04.088443   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:53:04.124590   57466 cri.go:89] found id: ""
	I0708 20:53:04.124621   57466 logs.go:276] 0 containers: []
	W0708 20:53:04.124631   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:53:04.124642   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:53:04.124656   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:53:04.175973   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:53:04.176011   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:53:04.189860   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:53:04.189883   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:53:04.264562   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:53:04.264594   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:53:04.264609   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:53:04.344684   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:53:04.344726   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
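The block above is one iteration of minikube's control-plane poll while it waits for the API server to return: a pgrep for kube-apiserver, a crictl query per expected component, and a diagnostics sweep (kubelet journal, dmesg, describe nodes, CRI-O journal, container status) when nothing is found. A minimal shell sketch of the same checks, run directly on the node rather than through minikube's ssh_runner; it assumes crictl and journalctl are available, as they are in the log above:

	# Hypothetical standalone sketch of the per-component check the log records.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container found matching \"$name\""
	  fi
	done
	# When nothing is found, gather the same diagnostics minikube collects:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a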
	I0708 20:53:06.882531   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:53:06.895508   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:53:06.895582   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:53:06.929999   57466 cri.go:89] found id: ""
	I0708 20:53:06.930028   57466 logs.go:276] 0 containers: []
	W0708 20:53:06.930038   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:53:06.930045   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:53:06.930102   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:53:06.965925   57466 cri.go:89] found id: ""
	I0708 20:53:06.965958   57466 logs.go:276] 0 containers: []
	W0708 20:53:06.965968   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:53:06.965975   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:53:06.966031   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:53:07.001047   57466 cri.go:89] found id: ""
	I0708 20:53:07.001080   57466 logs.go:276] 0 containers: []
	W0708 20:53:07.001091   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:53:07.001098   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:53:07.001162   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:53:07.037167   57466 cri.go:89] found id: ""
	I0708 20:53:07.037193   57466 logs.go:276] 0 containers: []
	W0708 20:53:07.037201   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:53:07.037207   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:53:07.037259   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:53:07.072266   57466 cri.go:89] found id: ""
	I0708 20:53:07.072289   57466 logs.go:276] 0 containers: []
	W0708 20:53:07.072296   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:53:07.072301   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:53:07.072347   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:53:07.108799   57466 cri.go:89] found id: ""
	I0708 20:53:07.108824   57466 logs.go:276] 0 containers: []
	W0708 20:53:07.108835   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:53:07.108843   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:53:07.108902   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:53:07.142103   57466 cri.go:89] found id: ""
	I0708 20:53:07.142132   57466 logs.go:276] 0 containers: []
	W0708 20:53:07.142143   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:53:07.142150   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:53:07.142213   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:53:07.176800   57466 cri.go:89] found id: ""
	I0708 20:53:07.176825   57466 logs.go:276] 0 containers: []
	W0708 20:53:07.176833   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:53:07.176842   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:53:07.176852   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:53:07.226868   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:53:07.226904   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:53:07.240806   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:53:07.240832   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:53:07.308211   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:53:07.308237   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:53:07.308255   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:53:07.386191   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:53:07.386230   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:53:09.924416   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:53:09.938235   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:53:09.938308   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:53:09.972195   57466 cri.go:89] found id: ""
	I0708 20:53:09.972218   57466 logs.go:276] 0 containers: []
	W0708 20:53:09.972228   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:53:09.972235   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:53:09.972294   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:53:10.005308   57466 cri.go:89] found id: ""
	I0708 20:53:10.005339   57466 logs.go:276] 0 containers: []
	W0708 20:53:10.005358   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:53:10.005366   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:53:10.005426   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:53:10.039410   57466 cri.go:89] found id: ""
	I0708 20:53:10.039438   57466 logs.go:276] 0 containers: []
	W0708 20:53:10.039445   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:53:10.039459   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:53:10.039507   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:53:10.078391   57466 cri.go:89] found id: ""
	I0708 20:53:10.078415   57466 logs.go:276] 0 containers: []
	W0708 20:53:10.078423   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:53:10.078428   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:53:10.078476   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:53:10.113736   57466 cri.go:89] found id: ""
	I0708 20:53:10.113760   57466 logs.go:276] 0 containers: []
	W0708 20:53:10.113767   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:53:10.113772   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:53:10.113818   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:53:10.148608   57466 cri.go:89] found id: ""
	I0708 20:53:10.148629   57466 logs.go:276] 0 containers: []
	W0708 20:53:10.148635   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:53:10.148641   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:53:10.148684   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:53:10.181285   57466 cri.go:89] found id: ""
	I0708 20:53:10.181306   57466 logs.go:276] 0 containers: []
	W0708 20:53:10.181313   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:53:10.181321   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:53:10.181365   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:53:10.215807   57466 cri.go:89] found id: ""
	I0708 20:53:10.215834   57466 logs.go:276] 0 containers: []
	W0708 20:53:10.215847   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:53:10.215859   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:53:10.215875   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:53:10.263790   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:53:10.263822   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:53:10.276815   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:53:10.276842   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:53:10.351822   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:53:10.351847   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:53:10.351861   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:53:10.423884   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:53:10.423924   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:53:12.965102   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:53:12.978471   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:53:12.978539   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:53:13.013944   57466 cri.go:89] found id: ""
	I0708 20:53:13.013971   57466 logs.go:276] 0 containers: []
	W0708 20:53:13.013981   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:53:13.013989   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:53:13.014053   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:53:13.046605   57466 cri.go:89] found id: ""
	I0708 20:53:13.046629   57466 logs.go:276] 0 containers: []
	W0708 20:53:13.046640   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:53:13.046647   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:53:13.046707   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:53:13.081794   57466 cri.go:89] found id: ""
	I0708 20:53:13.081824   57466 logs.go:276] 0 containers: []
	W0708 20:53:13.081835   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:53:13.081842   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:53:13.081901   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:53:13.116842   57466 cri.go:89] found id: ""
	I0708 20:53:13.116870   57466 logs.go:276] 0 containers: []
	W0708 20:53:13.116881   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:53:13.116887   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:53:13.116949   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:53:13.154693   57466 cri.go:89] found id: ""
	I0708 20:53:13.154727   57466 logs.go:276] 0 containers: []
	W0708 20:53:13.154738   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:53:13.154745   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:53:13.154806   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:53:13.191314   57466 cri.go:89] found id: ""
	I0708 20:53:13.191346   57466 logs.go:276] 0 containers: []
	W0708 20:53:13.191356   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:53:13.191363   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:53:13.191425   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:53:13.226269   57466 cri.go:89] found id: ""
	I0708 20:53:13.226297   57466 logs.go:276] 0 containers: []
	W0708 20:53:13.226307   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:53:13.226313   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:53:13.226372   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:53:13.262306   57466 cri.go:89] found id: ""
	I0708 20:53:13.262339   57466 logs.go:276] 0 containers: []
	W0708 20:53:13.262349   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:53:13.262365   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:53:13.262381   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:53:13.313272   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:53:13.313305   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:53:13.326478   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:53:13.326505   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:53:13.398008   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:53:13.398037   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:53:13.398052   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:53:13.475874   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:53:13.475910   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:53:16.016380   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:53:16.030613   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:53:16.030681   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:53:16.065248   57466 cri.go:89] found id: ""
	I0708 20:53:16.065272   57466 logs.go:276] 0 containers: []
	W0708 20:53:16.065280   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:53:16.065285   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:53:16.065333   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:53:16.101160   57466 cri.go:89] found id: ""
	I0708 20:53:16.101187   57466 logs.go:276] 0 containers: []
	W0708 20:53:16.101197   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:53:16.101204   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:53:16.101265   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:53:16.135357   57466 cri.go:89] found id: ""
	I0708 20:53:16.135391   57466 logs.go:276] 0 containers: []
	W0708 20:53:16.135401   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:53:16.135409   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:53:16.135483   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:53:16.170713   57466 cri.go:89] found id: ""
	I0708 20:53:16.170737   57466 logs.go:276] 0 containers: []
	W0708 20:53:16.170747   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:53:16.170754   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:53:16.170812   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:53:16.205763   57466 cri.go:89] found id: ""
	I0708 20:53:16.205790   57466 logs.go:276] 0 containers: []
	W0708 20:53:16.205800   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:53:16.205808   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:53:16.205868   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:53:16.242525   57466 cri.go:89] found id: ""
	I0708 20:53:16.242553   57466 logs.go:276] 0 containers: []
	W0708 20:53:16.242561   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:53:16.242567   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:53:16.242630   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:53:16.276481   57466 cri.go:89] found id: ""
	I0708 20:53:16.276504   57466 logs.go:276] 0 containers: []
	W0708 20:53:16.276512   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:53:16.276516   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:53:16.276562   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:53:16.310592   57466 cri.go:89] found id: ""
	I0708 20:53:16.310615   57466 logs.go:276] 0 containers: []
	W0708 20:53:16.310622   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:53:16.310629   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:53:16.310640   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:53:16.361383   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:53:16.361425   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:53:16.375080   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:53:16.375106   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:53:16.445751   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:53:16.445770   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:53:16.445781   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:53:16.522611   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:53:16.522647   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:53:19.059733   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:53:19.073237   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:53:19.073321   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:53:19.109101   57466 cri.go:89] found id: ""
	I0708 20:53:19.109127   57466 logs.go:276] 0 containers: []
	W0708 20:53:19.109135   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:53:19.109140   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:53:19.109206   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:53:19.143083   57466 cri.go:89] found id: ""
	I0708 20:53:19.143124   57466 logs.go:276] 0 containers: []
	W0708 20:53:19.143135   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:53:19.143141   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:53:19.143206   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:53:19.180106   57466 cri.go:89] found id: ""
	I0708 20:53:19.180146   57466 logs.go:276] 0 containers: []
	W0708 20:53:19.180157   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:53:19.180169   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:53:19.180229   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:53:19.214335   57466 cri.go:89] found id: ""
	I0708 20:53:19.214360   57466 logs.go:276] 0 containers: []
	W0708 20:53:19.214368   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:53:19.214374   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:53:19.214426   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:53:19.248595   57466 cri.go:89] found id: ""
	I0708 20:53:19.248621   57466 logs.go:276] 0 containers: []
	W0708 20:53:19.248632   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:53:19.248639   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:53:19.248698   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:53:19.282168   57466 cri.go:89] found id: ""
	I0708 20:53:19.282193   57466 logs.go:276] 0 containers: []
	W0708 20:53:19.282200   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:53:19.282206   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:53:19.282253   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:53:19.319181   57466 cri.go:89] found id: ""
	I0708 20:53:19.319203   57466 logs.go:276] 0 containers: []
	W0708 20:53:19.319210   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:53:19.319215   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:53:19.319260   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:53:19.354243   57466 cri.go:89] found id: ""
	I0708 20:53:19.354267   57466 logs.go:276] 0 containers: []
	W0708 20:53:19.354273   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:53:19.354282   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:53:19.354294   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 20:53:19.405341   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:53:19.405375   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:53:19.418523   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:53:19.418554   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:53:19.492123   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:53:19.492151   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:53:19.492166   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:53:19.570107   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:53:19.570140   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:53:22.110864   57466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:53:22.124300   57466 kubeadm.go:591] duration metric: took 4m2.375084001s to restartPrimaryControlPlane
	W0708 20:53:22.124385   57466 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 20:53:22.124421   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 20:53:22.917915   57466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:53:22.932684   57466 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:53:22.943078   57466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:53:22.953356   57466 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:53:22.953373   57466 kubeadm.go:156] found existing configuration files:
	
	I0708 20:53:22.953418   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:53:22.963021   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:53:22.963085   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:53:22.973706   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:53:22.983988   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:53:22.984040   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:53:22.994589   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:53:23.004248   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:53:23.004317   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:53:23.014414   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:53:23.024414   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:53:23.024479   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
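Before retrying kubeadm init, the log shows the stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the grep fails (here all four files are already gone after the reset). A hedged sketch of that check as a standalone loop, using the same endpoint and paths as the log:

	# Sketch only: drop kubeconfigs that do not reference the expected endpoint.
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done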
	I0708 20:53:23.035170   57466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 20:53:23.255443   57466 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 20:55:19.358315   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:55:19.358408   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:55:19.359948   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:55:19.360000   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:55:19.360076   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:55:19.360217   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:55:19.360354   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:55:19.360443   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:55:19.362594   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:55:19.362671   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:55:19.362761   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:55:19.362915   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:55:19.362997   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:55:19.363087   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:55:19.363181   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:55:19.363271   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:55:19.363360   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:55:19.363470   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:55:19.363582   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:55:19.363636   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:55:19.363711   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:55:19.363781   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:55:19.363852   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:55:19.363941   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:55:19.364010   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:55:19.364135   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:55:19.364226   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:55:19.364276   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:55:19.364342   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:55:19.366132   57466 out.go:204]   - Booting up control plane ...
	I0708 20:55:19.366219   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:55:19.366301   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:55:19.366364   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:55:19.366433   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:55:19.366579   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:55:19.366629   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:55:19.366692   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.366846   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.366909   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367070   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367133   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367285   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367344   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367511   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367575   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367735   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367743   57466 kubeadm.go:309] 
	I0708 20:55:19.367783   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:55:19.367817   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:55:19.367823   57466 kubeadm.go:309] 
	I0708 20:55:19.367851   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:55:19.367888   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:55:19.367991   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:55:19.368009   57466 kubeadm.go:309] 
	I0708 20:55:19.368127   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:55:19.368164   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:55:19.368192   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:55:19.368198   57466 kubeadm.go:309] 
	I0708 20:55:19.368284   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:55:19.368355   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:55:19.368362   57466 kubeadm.go:309] 
	I0708 20:55:19.368455   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:55:19.368539   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:55:19.368606   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:55:19.368666   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:55:19.368673   57466 kubeadm.go:309] 
	W0708 20:55:19.368784   57466 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
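The init attempt fails in the wait-control-plane phase because the kubelet health endpoint on 127.0.0.1:10248 never answers. The troubleshooting steps kubeadm prints can be run verbatim on the node; the commands below are taken directly from the error text above, with CONTAINERID left as the placeholder kubeadm uses:

	# Check whether the kubelet is running and why it is failing.
	systemctl status kubelet
	journalctl -xeu kubelet
	# Probe the health endpoint kubeadm polls during wait-control-plane.
	curl -sSL http://localhost:10248/healthz
	# List control-plane containers via CRI-O's socket, then inspect the failing one.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID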
	
	I0708 20:55:19.368831   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 20:55:19.838778   57466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:55:19.853958   57466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:55:19.863986   57466 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:55:19.864010   57466 kubeadm.go:156] found existing configuration files:
	
	I0708 20:55:19.864055   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:55:19.873085   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:55:19.873147   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:55:19.882654   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:55:19.891579   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:55:19.891634   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:55:19.901397   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.910901   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:55:19.910976   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.920599   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:55:19.929826   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:55:19.929891   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:55:19.939284   57466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 20:55:20.153136   57466 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 20:57:16.353120   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:57:16.353203   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:57:16.355269   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:57:16.355317   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:57:16.355404   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:57:16.355558   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:57:16.355708   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:57:16.355815   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:57:16.358151   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:57:16.358312   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:57:16.358411   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:57:16.358531   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:57:16.358641   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:57:16.358732   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:57:16.358798   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:57:16.358893   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:57:16.359004   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:57:16.359128   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:57:16.359209   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:57:16.359288   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:57:16.359384   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:57:16.359509   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:57:16.359614   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:57:16.359725   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:57:16.359794   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:57:16.359881   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:57:16.359963   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:57:16.360002   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:57:16.360099   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:57:16.361960   57466 out.go:204]   - Booting up control plane ...
	I0708 20:57:16.362053   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:57:16.362196   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:57:16.362283   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:57:16.362402   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:57:16.362589   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:57:16.362819   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:57:16.362930   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363170   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363242   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363473   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363580   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363786   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363873   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364093   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364247   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364435   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364445   57466 kubeadm.go:309] 
	I0708 20:57:16.364476   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:57:16.364533   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:57:16.364541   57466 kubeadm.go:309] 
	I0708 20:57:16.364601   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:57:16.364636   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:57:16.364796   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:57:16.364820   57466 kubeadm.go:309] 
	I0708 20:57:16.364958   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:57:16.365016   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:57:16.365057   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:57:16.365063   57466 kubeadm.go:309] 
	I0708 20:57:16.365208   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:57:16.365339   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:57:16.365356   57466 kubeadm.go:309] 
	I0708 20:57:16.365490   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:57:16.365589   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:57:16.365694   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:57:16.365869   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:57:16.365969   57466 kubeadm.go:309] 
	I0708 20:57:16.365972   57466 kubeadm.go:393] duration metric: took 7m56.670441698s to StartCluster
	I0708 20:57:16.366023   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:57:16.366090   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:57:16.435868   57466 cri.go:89] found id: ""
	I0708 20:57:16.435896   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.435904   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:57:16.435910   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:57:16.435969   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:57:16.478844   57466 cri.go:89] found id: ""
	I0708 20:57:16.478881   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.478896   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:57:16.478904   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:57:16.478974   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:57:16.517414   57466 cri.go:89] found id: ""
	I0708 20:57:16.517439   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.517448   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:57:16.517455   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:57:16.517516   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:57:16.557036   57466 cri.go:89] found id: ""
	I0708 20:57:16.557063   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.557074   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:57:16.557081   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:57:16.557153   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:57:16.593604   57466 cri.go:89] found id: ""
	I0708 20:57:16.593631   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.593641   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:57:16.593648   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:57:16.593704   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:57:16.634143   57466 cri.go:89] found id: ""
	I0708 20:57:16.634173   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.634183   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:57:16.634190   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:57:16.634248   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:57:16.676553   57466 cri.go:89] found id: ""
	I0708 20:57:16.676585   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.676595   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:57:16.676602   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:57:16.676663   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:57:16.715652   57466 cri.go:89] found id: ""
	I0708 20:57:16.715674   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.715682   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:57:16.715692   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:57:16.715703   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:57:16.730747   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:57:16.730776   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:57:16.814950   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:57:16.814976   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:57:16.815005   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:57:16.921144   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:57:16.921194   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:57:16.973261   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:57:16.973294   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 20:57:17.031242   57466 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0708 20:57:17.031307   57466 out.go:239] * 
	W0708 20:57:17.031362   57466 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.031389   57466 out.go:239] * 
	W0708 20:57:17.032214   57466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:57:17.035847   57466 out.go:177] 
	W0708 20:57:17.037198   57466 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.037247   57466 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0708 20:57:17.037274   57466 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0708 20:57:17.039077   57466 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-914355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
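A minimal sketch of the follow-up implied by the output above, assuming the old-k8s-version-914355 guest is still running: the profile name and start flags are taken from the failed command, but the exact ssh invocations are an assumption and were not part of the captured run. It inspects the kubelet inside the VM with the commands the kubeadm output recommends, then retries the start with the kubelet cgroup-driver override that minikube itself suggests.

	# inspect the kubelet inside the guest (hypothetical follow-up, not part of this run)
	minikube -p old-k8s-version-914355 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-914355 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	minikube -p old-k8s-version-914355 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# if the kubelet reports a cgroup-driver mismatch, retry with the suggested override
	out/minikube-linux-amd64 start -p old-k8s-version-914355 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd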
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 2 (243.285693ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-914355 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-897827                                        | pause-897827                 | jenkins | v1.33.1 | 08 Jul 24 20:46 UTC | 08 Jul 24 20:46 UTC |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:46 UTC | 08 Jul 24 20:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| ssh     | cert-options-059722 ssh                                | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-059722 -- sudo                         | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-059722                                 | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-028021             | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-914355             | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-239931            | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-733920 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-733920                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:50 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028021                  | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071971  | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-239931                 | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071971       | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC |                     |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 20:53:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 20:53:37.291760   59655 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:53:37.291847   59655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:53:37.291851   59655 out.go:304] Setting ErrFile to fd 2...
	I0708 20:53:37.291855   59655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:53:37.292047   59655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:53:37.292558   59655 out.go:298] Setting JSON to false
	I0708 20:53:37.293434   59655 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5766,"bootTime":1720466251,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:53:37.293485   59655 start.go:139] virtualization: kvm guest
	I0708 20:53:37.296412   59655 out.go:177] * [default-k8s-diff-port-071971] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:53:37.297727   59655 notify.go:220] Checking for updates...
	I0708 20:53:37.297756   59655 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:53:37.299168   59655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:53:37.300541   59655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:53:37.301818   59655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:53:37.303117   59655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:53:37.304266   59655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:53:37.305793   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:53:37.306182   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:53:37.306236   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:53:37.321719   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I0708 20:53:37.322090   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:53:37.322593   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:53:37.322617   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:53:37.322908   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:53:37.323093   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:53:37.323329   59655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:53:37.323638   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:53:37.323679   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:53:37.338244   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0708 20:53:37.338660   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:53:37.339118   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:53:37.339144   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:53:37.339463   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:53:37.339735   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:53:37.374356   59655 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 20:53:37.375714   59655 start.go:297] selected driver: kvm2
	I0708 20:53:37.375729   59655 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:53:37.375866   59655 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:53:37.376843   59655 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:53:37.376918   59655 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:53:37.391219   59655 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:53:37.391602   59655 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:53:37.391659   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:53:37.391672   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:53:37.391707   59655 start.go:340] cluster config:
	{Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:53:37.391797   59655 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:53:37.393453   59655 out.go:177] * Starting "default-k8s-diff-port-071971" primary control-plane node in "default-k8s-diff-port-071971" cluster
	I0708 20:53:37.923695   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:40.995762   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:37.394736   59655 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:53:37.394768   59655 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 20:53:37.394777   59655 cache.go:56] Caching tarball of preloaded images
	I0708 20:53:37.394849   59655 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 20:53:37.394860   59655 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 20:53:37.394962   59655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:53:37.395154   59655 start.go:360] acquireMachinesLock for default-k8s-diff-port-071971: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:53:47.075721   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:50.147727   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:56.227766   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:59.299738   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:05.379699   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:08.451749   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:14.531759   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:17.603688   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:23.683730   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:26.755738   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:32.835706   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:35.907702   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:41.987722   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:45.059873   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:51.139726   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:54.211797   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:00.291730   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:03.363720   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:09.443741   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:12.515718   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:19.358315   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:55:19.358408   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:55:19.359948   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:55:19.360000   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:55:19.360076   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:55:19.360217   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:55:19.360354   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:55:19.360443   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:55:19.362594   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:55:19.362671   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:55:19.362761   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:55:19.362915   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:55:19.362997   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:55:19.363087   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:55:19.363181   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:55:19.363271   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:55:19.363360   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:55:19.363470   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:55:19.363582   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:55:19.363636   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:55:19.363711   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:55:19.363781   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:55:19.363852   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:55:19.363941   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:55:19.364010   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:55:19.364135   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:55:19.364226   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:55:19.364276   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:55:19.364342   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:55:18.595786   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:19.366132   57466 out.go:204]   - Booting up control plane ...
	I0708 20:55:19.366219   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:55:19.366301   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:55:19.366364   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:55:19.366433   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:55:19.366579   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:55:19.366629   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:55:19.366692   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.366846   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.366909   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367070   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367133   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367285   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367344   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367511   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367575   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367735   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367743   57466 kubeadm.go:309] 
	I0708 20:55:19.367783   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:55:19.367817   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:55:19.367823   57466 kubeadm.go:309] 
	I0708 20:55:19.367851   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:55:19.367888   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:55:19.367991   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:55:19.368009   57466 kubeadm.go:309] 
	I0708 20:55:19.368127   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:55:19.368164   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:55:19.368192   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:55:19.368198   57466 kubeadm.go:309] 
	I0708 20:55:19.368284   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:55:19.368355   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:55:19.368362   57466 kubeadm.go:309] 
	I0708 20:55:19.368455   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:55:19.368539   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:55:19.368606   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:55:19.368666   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:55:19.368673   57466 kubeadm.go:309] 
	W0708 20:55:19.368784   57466 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
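	For convenience, the node-side checks suggested by the kubeadm message above, collected in one place; running them inside the guest VM (e.g. via minikube ssh) is an assumption about the workflow, not something this log shows:

		systemctl status kubelet
		journalctl -xeu kubelet
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID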
	
	I0708 20:55:19.368831   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 20:55:19.838778   57466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:55:19.853958   57466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:55:19.863986   57466 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:55:19.864010   57466 kubeadm.go:156] found existing configuration files:
	
	I0708 20:55:19.864055   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:55:19.873085   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:55:19.873147   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:55:19.882654   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:55:19.891579   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:55:19.891634   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:55:19.901397   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.910901   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:55:19.910976   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.920599   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:55:19.929826   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:55:19.929891   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:55:19.939284   57466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 20:55:20.153136   57466 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 20:55:21.667700   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:27.747756   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:30.819712   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:33.824320   59107 start.go:364] duration metric: took 3m48.54985296s to acquireMachinesLock for "embed-certs-239931"
	I0708 20:55:33.824375   59107 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:55:33.824390   59107 fix.go:54] fixHost starting: 
	I0708 20:55:33.824700   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:55:33.824728   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:55:33.839554   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0708 20:55:33.839987   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:55:33.840472   59107 main.go:141] libmachine: Using API Version  1
	I0708 20:55:33.840495   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:55:33.840844   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:55:33.841030   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:33.841194   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 20:55:33.842597   59107 fix.go:112] recreateIfNeeded on embed-certs-239931: state=Stopped err=<nil>
	I0708 20:55:33.842627   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	W0708 20:55:33.842787   59107 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:55:33.844574   59107 out.go:177] * Restarting existing kvm2 VM for "embed-certs-239931" ...
	I0708 20:55:33.845674   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Start
	I0708 20:55:33.845858   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring networks are active...
	I0708 20:55:33.846607   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring network default is active
	I0708 20:55:33.846907   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring network mk-embed-certs-239931 is active
	I0708 20:55:33.847329   59107 main.go:141] libmachine: (embed-certs-239931) Getting domain xml...
	I0708 20:55:33.848120   59107 main.go:141] libmachine: (embed-certs-239931) Creating domain...
	I0708 20:55:35.057523   59107 main.go:141] libmachine: (embed-certs-239931) Waiting to get IP...
	I0708 20:55:35.058300   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.058841   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.058870   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.058773   60083 retry.go:31] will retry after 280.969113ms: waiting for machine to come up
	I0708 20:55:33.821580   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:55:33.821617   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:55:33.821932   58678 buildroot.go:166] provisioning hostname "no-preload-028021"
	I0708 20:55:33.821957   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:55:33.822166   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:55:33.824193   58678 machine.go:97] duration metric: took 4m37.421469682s to provisionDockerMachine
	I0708 20:55:33.824234   58678 fix.go:56] duration metric: took 4m37.444794791s for fixHost
	I0708 20:55:33.824241   58678 start.go:83] releasing machines lock for "no-preload-028021", held for 4m37.44481517s
	W0708 20:55:33.824262   58678 start.go:713] error starting host: provision: host is not running
	W0708 20:55:33.824343   58678 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0708 20:55:33.824352   58678 start.go:728] Will try again in 5 seconds ...
	I0708 20:55:35.341327   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.341861   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.341882   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.341837   60083 retry.go:31] will retry after 333.972717ms: waiting for machine to come up
	I0708 20:55:35.677531   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.678035   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.678066   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.677984   60083 retry.go:31] will retry after 387.46652ms: waiting for machine to come up
	I0708 20:55:36.066618   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:36.067079   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:36.067106   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:36.067044   60083 retry.go:31] will retry after 523.369614ms: waiting for machine to come up
	I0708 20:55:36.591863   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:36.592337   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:36.592363   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:36.592295   60083 retry.go:31] will retry after 670.675561ms: waiting for machine to come up
	I0708 20:55:37.264084   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:37.264521   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:37.264565   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:37.264485   60083 retry.go:31] will retry after 775.348922ms: waiting for machine to come up
	I0708 20:55:38.041398   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:38.041860   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:38.041885   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:38.041801   60083 retry.go:31] will retry after 1.135585711s: waiting for machine to come up
	I0708 20:55:39.179405   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:39.179910   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:39.179938   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:39.179867   60083 retry.go:31] will retry after 1.422689354s: waiting for machine to come up
	I0708 20:55:38.826037   58678 start.go:360] acquireMachinesLock for no-preload-028021: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:55:40.603811   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:40.604240   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:40.604261   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:40.604199   60083 retry.go:31] will retry after 1.640612147s: waiting for machine to come up
	I0708 20:55:42.247230   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:42.247797   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:42.247837   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:42.247733   60083 retry.go:31] will retry after 2.031069729s: waiting for machine to come up
	I0708 20:55:44.280878   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:44.281419   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:44.281451   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:44.281355   60083 retry.go:31] will retry after 2.394813785s: waiting for machine to come up
	I0708 20:55:46.678897   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:46.679398   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:46.679430   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:46.679357   60083 retry.go:31] will retry after 2.419242459s: waiting for machine to come up
	I0708 20:55:49.100362   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:49.100901   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:49.100964   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:49.100858   60083 retry.go:31] will retry after 4.241202363s: waiting for machine to come up
	I0708 20:55:54.868873   59655 start.go:364] duration metric: took 2m17.473689428s to acquireMachinesLock for "default-k8s-diff-port-071971"
	I0708 20:55:54.868970   59655 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:55:54.868991   59655 fix.go:54] fixHost starting: 
	I0708 20:55:54.869400   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:55:54.869432   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:55:54.888853   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44159
	I0708 20:55:54.889234   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:55:54.889674   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:55:54.889698   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:55:54.890009   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:55:54.890196   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:55:54.890332   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 20:55:54.891932   59655 fix.go:112] recreateIfNeeded on default-k8s-diff-port-071971: state=Stopped err=<nil>
	I0708 20:55:54.891972   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	W0708 20:55:54.892120   59655 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:55:54.894439   59655 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-071971" ...
	I0708 20:55:53.347154   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.347587   59107 main.go:141] libmachine: (embed-certs-239931) Found IP for machine: 192.168.61.126
	I0708 20:55:53.347601   59107 main.go:141] libmachine: (embed-certs-239931) Reserving static IP address...
	I0708 20:55:53.347612   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has current primary IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.348084   59107 main.go:141] libmachine: (embed-certs-239931) Reserved static IP address: 192.168.61.126
	I0708 20:55:53.348106   59107 main.go:141] libmachine: (embed-certs-239931) Waiting for SSH to be available...
	I0708 20:55:53.348119   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "embed-certs-239931", mac: "52:54:00:b3:d9:ac", ip: "192.168.61.126"} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.348136   59107 main.go:141] libmachine: (embed-certs-239931) DBG | skip adding static IP to network mk-embed-certs-239931 - found existing host DHCP lease matching {name: "embed-certs-239931", mac: "52:54:00:b3:d9:ac", ip: "192.168.61.126"}
	I0708 20:55:53.348148   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Getting to WaitForSSH function...
	I0708 20:55:53.350167   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.350545   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.350583   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.350651   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Using SSH client type: external
	I0708 20:55:53.350675   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa (-rw-------)
	I0708 20:55:53.350704   59107 main.go:141] libmachine: (embed-certs-239931) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:55:53.350722   59107 main.go:141] libmachine: (embed-certs-239931) DBG | About to run SSH command:
	I0708 20:55:53.350736   59107 main.go:141] libmachine: (embed-certs-239931) DBG | exit 0
	I0708 20:55:53.479934   59107 main.go:141] libmachine: (embed-certs-239931) DBG | SSH cmd err, output: <nil>: 
	I0708 20:55:53.480309   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetConfigRaw
	I0708 20:55:53.480891   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:53.483079   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.483399   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.483424   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.483740   59107 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/config.json ...
	I0708 20:55:53.483920   59107 machine.go:94] provisionDockerMachine start ...
	I0708 20:55:53.483937   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:53.484172   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.486461   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.486772   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.486793   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.486921   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.487075   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.487253   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.487385   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.487556   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.487774   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.487786   59107 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:55:53.600047   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:55:53.600078   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.600308   59107 buildroot.go:166] provisioning hostname "embed-certs-239931"
	I0708 20:55:53.600342   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.600508   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.603107   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.603498   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.603529   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.603728   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.603954   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.604122   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.604345   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.604512   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.604716   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.604737   59107 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-239931 && echo "embed-certs-239931" | sudo tee /etc/hostname
	I0708 20:55:53.734414   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-239931
	
	I0708 20:55:53.734457   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.737117   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.737473   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.737501   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.737640   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.737852   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.738020   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.738184   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.738355   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.738536   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.738558   59107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-239931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-239931/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-239931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:55:53.860753   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:55:53.860781   59107 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:55:53.860799   59107 buildroot.go:174] setting up certificates
	I0708 20:55:53.860808   59107 provision.go:84] configureAuth start
	I0708 20:55:53.860816   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.861070   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:53.863652   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.863999   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.864018   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.864221   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.866207   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.866480   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.866504   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.866613   59107 provision.go:143] copyHostCerts
	I0708 20:55:53.866671   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:55:53.866680   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:55:53.866741   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:55:53.866837   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:55:53.866845   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:55:53.866868   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:55:53.866932   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:55:53.866939   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:55:53.866959   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:55:53.867017   59107 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.embed-certs-239931 san=[127.0.0.1 192.168.61.126 embed-certs-239931 localhost minikube]
	I0708 20:55:54.171765   59107 provision.go:177] copyRemoteCerts
	I0708 20:55:54.171835   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:55:54.171859   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.174341   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.174621   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.174650   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.174762   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.174957   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.175129   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.175280   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.262051   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:55:54.287118   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0708 20:55:54.310071   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:55:54.337811   59107 provision.go:87] duration metric: took 476.990356ms to configureAuth
	I0708 20:55:54.337851   59107 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:55:54.338077   59107 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:55:54.338147   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.340972   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.341259   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.341296   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.341423   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.341720   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.341870   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.342006   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.342147   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:54.342332   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:54.342350   59107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:55:54.618752   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:55:54.618775   59107 machine.go:97] duration metric: took 1.134844127s to provisionDockerMachine
	I0708 20:55:54.618786   59107 start.go:293] postStartSetup for "embed-certs-239931" (driver="kvm2")
	I0708 20:55:54.618795   59107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:55:54.618823   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.619220   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:55:54.619249   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.621857   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.622144   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.622168   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.622348   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.622532   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.622703   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.622853   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.710096   59107 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:55:54.714437   59107 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:55:54.714458   59107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:55:54.714524   59107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:55:54.714597   59107 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:55:54.714679   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:55:54.724350   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:55:54.748078   59107 start.go:296] duration metric: took 129.264358ms for postStartSetup
	I0708 20:55:54.748120   59107 fix.go:56] duration metric: took 20.923736253s for fixHost
	I0708 20:55:54.748138   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.750818   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.751200   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.751227   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.751377   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.751611   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.751763   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.751879   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.752034   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:54.752240   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:54.752256   59107 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 20:55:54.868663   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472154.844724958
	
	I0708 20:55:54.868694   59107 fix.go:216] guest clock: 1720472154.844724958
	I0708 20:55:54.868706   59107 fix.go:229] Guest: 2024-07-08 20:55:54.844724958 +0000 UTC Remote: 2024-07-08 20:55:54.748123056 +0000 UTC m=+249.617599643 (delta=96.601902ms)
	I0708 20:55:54.868764   59107 fix.go:200] guest clock delta is within tolerance: 96.601902ms
	I0708 20:55:54.868776   59107 start.go:83] releasing machines lock for "embed-certs-239931", held for 21.044425411s
	I0708 20:55:54.868811   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.869092   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:54.871867   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.872252   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.872295   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.872451   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.872921   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.873060   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.873151   59107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:55:54.873196   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.873271   59107 ssh_runner.go:195] Run: cat /version.json
	I0708 20:55:54.873297   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.876118   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876142   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876614   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.876641   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876682   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.876699   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876851   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.876903   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.877017   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.877020   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.877193   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.877266   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.877349   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.877424   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.984516   59107 ssh_runner.go:195] Run: systemctl --version
	I0708 20:55:54.990926   59107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:55:55.142010   59107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:55:55.148138   59107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:55:55.148203   59107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:55:55.164086   59107 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:55:55.164111   59107 start.go:494] detecting cgroup driver to use...
	I0708 20:55:55.164204   59107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:55:55.184836   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:55:55.204002   59107 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:55:55.204079   59107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:55:55.218405   59107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:55:55.233462   59107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:55:55.357278   59107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:55:55.521141   59107 docker.go:233] disabling docker service ...
	I0708 20:55:55.521218   59107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:55:55.538949   59107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:55:55.558613   59107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:55:55.696926   59107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:55:55.819810   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:55:55.837012   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:55:55.856417   59107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:55:55.856497   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.868488   59107 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:55:55.868556   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.879503   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.891183   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.901872   59107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:55:55.914498   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.925676   59107 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.944340   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.955961   59107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:55:55.965785   59107 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:55:55.965845   59107 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:55:55.979853   59107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:55:55.989331   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:55:56.108476   59107 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:55:56.262396   59107 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:55:56.262463   59107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:55:56.267682   59107 start.go:562] Will wait 60s for crictl version
	I0708 20:55:56.267740   59107 ssh_runner.go:195] Run: which crictl
	I0708 20:55:56.273115   59107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:55:56.323276   59107 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:55:56.323364   59107 ssh_runner.go:195] Run: crio --version
	I0708 20:55:56.352650   59107 ssh_runner.go:195] Run: crio --version
	I0708 20:55:56.394502   59107 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:55:54.895951   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Start
	I0708 20:55:54.896150   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring networks are active...
	I0708 20:55:54.896971   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring network default is active
	I0708 20:55:54.897344   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring network mk-default-k8s-diff-port-071971 is active
	I0708 20:55:54.897672   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Getting domain xml...
	I0708 20:55:54.898340   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Creating domain...
	I0708 20:55:56.182187   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting to get IP...
	I0708 20:55:56.183209   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.183699   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.183759   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.183663   60221 retry.go:31] will retry after 255.382138ms: waiting for machine to come up
	I0708 20:55:56.441290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.441760   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.441789   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.441718   60221 retry.go:31] will retry after 363.116234ms: waiting for machine to come up
	I0708 20:55:56.806418   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.806871   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.806899   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.806819   60221 retry.go:31] will retry after 392.319836ms: waiting for machine to come up
	I0708 20:55:57.200645   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.201144   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.201176   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:57.201095   60221 retry.go:31] will retry after 528.490844ms: waiting for machine to come up
	I0708 20:55:56.395778   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:56.398458   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:56.398826   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:56.398853   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:56.399088   59107 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0708 20:55:56.403789   59107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:55:56.418081   59107 kubeadm.go:877] updating cluster {Name:embed-certs-239931 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:55:56.418244   59107 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:55:56.418312   59107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:55:56.459969   59107 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:55:56.460034   59107 ssh_runner.go:195] Run: which lz4
	I0708 20:55:56.464561   59107 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 20:55:56.469087   59107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:55:56.469130   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 20:55:58.010716   59107 crio.go:462] duration metric: took 1.546186223s to copy over tarball
	I0708 20:55:58.010782   59107 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:55:57.731640   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.732172   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.732223   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:57.732129   60221 retry.go:31] will retry after 554.611559ms: waiting for machine to come up
	I0708 20:55:58.287924   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.288512   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.288557   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:58.288491   60221 retry.go:31] will retry after 642.466107ms: waiting for machine to come up
	I0708 20:55:58.932485   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.933002   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.933032   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:58.932958   60221 retry.go:31] will retry after 999.83146ms: waiting for machine to come up
	I0708 20:55:59.934050   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:59.934618   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:59.934664   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:59.934571   60221 retry.go:31] will retry after 1.069868254s: waiting for machine to come up
	I0708 20:56:01.006049   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:01.006563   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:01.006594   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:01.006506   60221 retry.go:31] will retry after 1.182777891s: waiting for machine to come up
	I0708 20:56:02.191001   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:02.191460   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:02.191483   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:02.191418   60221 retry.go:31] will retry after 1.559742627s: waiting for machine to come up
	I0708 20:56:00.267199   59107 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256392679s)
	I0708 20:56:00.267230   59107 crio.go:469] duration metric: took 2.256489175s to extract the tarball
	I0708 20:56:00.267240   59107 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:56:00.305692   59107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:00.346669   59107 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:56:00.346694   59107 cache_images.go:84] Images are preloaded, skipping loading
	I0708 20:56:00.346703   59107 kubeadm.go:928] updating node { 192.168.61.126 8443 v1.30.2 crio true true} ...
	I0708 20:56:00.346804   59107 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-239931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:00.346868   59107 ssh_runner.go:195] Run: crio config
	I0708 20:56:00.392577   59107 cni.go:84] Creating CNI manager for ""
	I0708 20:56:00.392597   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:00.392608   59107 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:00.392637   59107 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-239931 NodeName:embed-certs-239931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:00.392814   59107 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-239931"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:00.392894   59107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:00.403593   59107 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:00.403675   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:00.413449   59107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0708 20:56:00.430407   59107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:00.447599   59107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0708 20:56:00.465525   59107 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:00.469912   59107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:00.483255   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:00.623802   59107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:00.642946   59107 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931 for IP: 192.168.61.126
	I0708 20:56:00.642967   59107 certs.go:194] generating shared ca certs ...
	I0708 20:56:00.642982   59107 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:00.643143   59107 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:00.643184   59107 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:00.643193   59107 certs.go:256] generating profile certs ...
	I0708 20:56:00.643270   59107 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/client.key
	I0708 20:56:00.643317   59107 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.key.7743ab88
	I0708 20:56:00.643354   59107 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.key
	I0708 20:56:00.643487   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:00.643524   59107 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:00.643533   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:00.643556   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:00.643579   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:00.643604   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:00.643670   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:00.644353   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:00.699260   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:00.752536   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:00.783946   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:00.812524   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0708 20:56:00.843035   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:56:00.872061   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:00.898805   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 20:56:00.925402   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:00.952114   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:00.984067   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:01.010037   59107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:01.027599   59107 ssh_runner.go:195] Run: openssl version
	I0708 20:56:01.033942   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:01.046273   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.051807   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.051887   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.058482   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:01.070774   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:01.083215   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.088389   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.088460   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.094594   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:01.107360   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:01.119973   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.125011   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.125085   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.131596   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:56:01.143993   59107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:01.149299   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:01.156201   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:01.162939   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:01.169874   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:01.176264   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:01.182905   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0708 20:56:01.189961   59107 kubeadm.go:391] StartCluster: {Name:embed-certs-239931 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:01.190041   59107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:01.190085   59107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:01.238097   59107 cri.go:89] found id: ""
	I0708 20:56:01.238167   59107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:01.250478   59107 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:01.250503   59107 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:01.250509   59107 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:01.250562   59107 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:01.261664   59107 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:01.262667   59107 kubeconfig.go:125] found "embed-certs-239931" server: "https://192.168.61.126:8443"
	I0708 20:56:01.264788   59107 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:01.275846   59107 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.126
	I0708 20:56:01.275888   59107 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:01.275908   59107 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:01.276006   59107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:01.318646   59107 cri.go:89] found id: ""
	I0708 20:56:01.318745   59107 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:01.340273   59107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:01.353325   59107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:01.353360   59107 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:01.353412   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:56:01.363659   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:01.363732   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:01.374340   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:56:01.384284   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:01.384352   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:01.394981   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:56:01.405532   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:01.405604   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:01.416741   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:56:01.427724   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:01.427812   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:56:01.439352   59107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:01.451286   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:01.581829   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.013995   59107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.432133224s)
	I0708 20:56:03.014024   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.229195   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.305328   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.415409   59107 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:03.415508   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:03.916187   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:04.416389   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:04.489450   59107 api_server.go:72] duration metric: took 1.074041899s to wait for apiserver process to appear ...
	I0708 20:56:04.489482   59107 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:04.489516   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:04.490133   59107 api_server.go:269] stopped: https://192.168.61.126:8443/healthz: Get "https://192.168.61.126:8443/healthz": dial tcp 192.168.61.126:8443: connect: connection refused
	I0708 20:56:04.989698   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:03.753446   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:03.753998   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:03.754026   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:03.753954   60221 retry.go:31] will retry after 1.922949894s: waiting for machine to come up
	I0708 20:56:05.679244   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:05.679831   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:05.679860   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:05.679794   60221 retry.go:31] will retry after 3.531627288s: waiting for machine to come up
	I0708 20:56:07.673375   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:56:07.673404   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:56:07.673420   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:07.776516   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:07.776551   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:07.989668   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:07.996865   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:07.996897   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:08.490538   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:08.496342   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:08.496374   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:08.990583   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:09.001043   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0708 20:56:09.011126   59107 api_server.go:141] control plane version: v1.30.2
	I0708 20:56:09.011160   59107 api_server.go:131] duration metric: took 4.521668725s to wait for apiserver health ...
	I0708 20:56:09.011171   59107 cni.go:84] Creating CNI manager for ""
	I0708 20:56:09.011179   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:09.012842   59107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:56:09.014197   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:56:09.041325   59107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 20:56:09.073110   59107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:56:09.086225   59107 system_pods.go:59] 8 kube-system pods found
	I0708 20:56:09.086265   59107 system_pods.go:61] "coredns-7db6d8ff4d-wnqsl" [868e66bf-9f86-465f-aad1-d11a6d218ee6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:56:09.086276   59107 system_pods.go:61] "etcd-embed-certs-239931" [48815314-6e48-4fe0-b7b1-4a1d2f6610d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:56:09.086286   59107 system_pods.go:61] "kube-apiserver-embed-certs-239931" [665311f4-d633-4b93-ae8c-2b43b45fff68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:56:09.086294   59107 system_pods.go:61] "kube-controller-manager-embed-certs-239931" [4ab6d657-8c74-491c-b965-ac68f2bd323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:56:09.086309   59107 system_pods.go:61] "kube-proxy-5h5xl" [9b169148-aa75-40a2-b08b-1d579ee179b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 20:56:09.086316   59107 system_pods.go:61] "kube-scheduler-embed-certs-239931" [012399d8-10a4-407d-a899-3c840dd52ca8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:56:09.086331   59107 system_pods.go:61] "metrics-server-569cc877fc-h4btg" [c78cfc3c-159f-4a06-b4a0-63f8bd0a6703] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:56:09.086339   59107 system_pods.go:61] "storage-provisioner" [2ca0ea1d-5d1c-4e18-a871-e035a8946b3c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 20:56:09.086348   59107 system_pods.go:74] duration metric: took 13.216051ms to wait for pod list to return data ...
	I0708 20:56:09.086363   59107 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:56:09.089689   59107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:56:09.089719   59107 node_conditions.go:123] node cpu capacity is 2
	I0708 20:56:09.089732   59107 node_conditions.go:105] duration metric: took 3.363611ms to run NodePressure ...
	I0708 20:56:09.089751   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:09.377271   59107 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:56:09.383148   59107 kubeadm.go:733] kubelet initialised
	I0708 20:56:09.383174   59107 kubeadm.go:734] duration metric: took 5.876526ms waiting for restarted kubelet to initialise ...
	I0708 20:56:09.383183   59107 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:56:09.388903   59107 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:09.214856   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:09.215410   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:09.215441   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:09.215355   60221 retry.go:31] will retry after 3.64169465s: waiting for machine to come up
	I0708 20:56:14.180834   58678 start.go:364] duration metric: took 35.354748041s to acquireMachinesLock for "no-preload-028021"
	I0708 20:56:14.180893   58678 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:56:14.180905   58678 fix.go:54] fixHost starting: 
	I0708 20:56:14.181259   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:56:14.181299   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:56:14.197712   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I0708 20:56:14.198157   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:56:14.198615   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:56:14.198637   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:56:14.198996   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:56:14.199173   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:14.199342   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:56:14.200905   58678 fix.go:112] recreateIfNeeded on no-preload-028021: state=Stopped err=<nil>
	I0708 20:56:14.200930   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	W0708 20:56:14.201103   58678 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:56:14.203062   58678 out.go:177] * Restarting existing kvm2 VM for "no-preload-028021" ...
	I0708 20:56:11.396453   59107 pod_ready.go:102] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:13.396672   59107 pod_ready.go:102] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:12.860535   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.860988   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Found IP for machine: 192.168.72.163
	I0708 20:56:12.861010   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Reserving static IP address...
	I0708 20:56:12.861027   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has current primary IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.861445   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-071971", mac: "52:54:00:40:a7:be", ip: "192.168.72.163"} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.861473   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Reserved static IP address: 192.168.72.163
	I0708 20:56:12.861494   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | skip adding static IP to network mk-default-k8s-diff-port-071971 - found existing host DHCP lease matching {name: "default-k8s-diff-port-071971", mac: "52:54:00:40:a7:be", ip: "192.168.72.163"}
	I0708 20:56:12.861515   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Getting to WaitForSSH function...
	I0708 20:56:12.861531   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for SSH to be available...
	I0708 20:56:12.864099   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.864436   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.864465   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.864631   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Using SSH client type: external
	I0708 20:56:12.864663   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa (-rw-------)
	I0708 20:56:12.864693   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:56:12.864708   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | About to run SSH command:
	I0708 20:56:12.864721   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | exit 0
	I0708 20:56:12.996077   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | SSH cmd err, output: <nil>: 
	I0708 20:56:12.996459   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetConfigRaw
	I0708 20:56:12.997091   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:12.999431   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.999815   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.999844   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.000145   59655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:56:13.000354   59655 machine.go:94] provisionDockerMachine start ...
	I0708 20:56:13.000377   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:13.000558   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.002898   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.003255   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.003290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.003444   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.003626   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.003778   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.003930   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.004094   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.004297   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.004311   59655 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:56:13.119929   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:56:13.119956   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.120203   59655 buildroot.go:166] provisioning hostname "default-k8s-diff-port-071971"
	I0708 20:56:13.120320   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.120544   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.123750   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.124225   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.124256   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.124438   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.124647   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.124818   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.124993   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.125155   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.125339   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.125360   59655 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-071971 && echo "default-k8s-diff-port-071971" | sudo tee /etc/hostname
	I0708 20:56:13.256165   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-071971
	
	I0708 20:56:13.256199   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.258991   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.259342   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.259376   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.259596   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.259828   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.260011   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.260149   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.260325   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.260506   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.260530   59655 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-071971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-071971/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-071971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:56:13.381593   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:56:13.381627   59655 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:56:13.381684   59655 buildroot.go:174] setting up certificates
	I0708 20:56:13.381700   59655 provision.go:84] configureAuth start
	I0708 20:56:13.381716   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.382023   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:13.385065   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.385358   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.385394   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.385566   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.387752   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.388092   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.388132   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.388290   59655 provision.go:143] copyHostCerts
	I0708 20:56:13.388350   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:56:13.388361   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:56:13.388408   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:56:13.388506   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:56:13.388516   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:56:13.388536   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:56:13.388587   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:56:13.388593   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:56:13.388610   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:56:13.389123   59655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-071971 san=[127.0.0.1 192.168.72.163 default-k8s-diff-port-071971 localhost minikube]
	I0708 20:56:13.445451   59655 provision.go:177] copyRemoteCerts
	I0708 20:56:13.445509   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:56:13.445536   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.448926   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.449291   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.449320   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.449579   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.449785   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.449944   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.450097   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:13.542311   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0708 20:56:13.570585   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 20:56:13.597943   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:56:13.623837   59655 provision.go:87] duration metric: took 242.102893ms to configureAuth
	I0708 20:56:13.623874   59655 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:56:13.624077   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:56:13.624144   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.626802   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.627247   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.627277   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.627553   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.627738   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.627910   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.628047   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.628214   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.628414   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.628442   59655 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:56:13.930321   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:56:13.930349   59655 machine.go:97] duration metric: took 929.979999ms to provisionDockerMachine
	I0708 20:56:13.930361   59655 start.go:293] postStartSetup for "default-k8s-diff-port-071971" (driver="kvm2")
	I0708 20:56:13.930371   59655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:56:13.930385   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:13.930714   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:56:13.930747   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.933397   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.933704   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.933735   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.933927   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.934119   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.934266   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.934393   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.019603   59655 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:56:14.024556   59655 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:56:14.024589   59655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:56:14.024651   59655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:56:14.024744   59655 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:56:14.024836   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:56:14.035798   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:14.062351   59655 start.go:296] duration metric: took 131.974167ms for postStartSetup
	I0708 20:56:14.062402   59655 fix.go:56] duration metric: took 19.193418124s for fixHost
	I0708 20:56:14.062428   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.065264   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.065784   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.065822   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.066027   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.066271   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.066441   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.066716   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.066965   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:14.067197   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:14.067210   59655 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:56:14.180654   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472174.151879540
	
	I0708 20:56:14.180683   59655 fix.go:216] guest clock: 1720472174.151879540
	I0708 20:56:14.180695   59655 fix.go:229] Guest: 2024-07-08 20:56:14.15187954 +0000 UTC Remote: 2024-07-08 20:56:14.062408788 +0000 UTC m=+156.804206336 (delta=89.470752ms)
	I0708 20:56:14.180751   59655 fix.go:200] guest clock delta is within tolerance: 89.470752ms
	I0708 20:56:14.180757   59655 start.go:83] releasing machines lock for "default-k8s-diff-port-071971", held for 19.311816598s
	I0708 20:56:14.180802   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.181119   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:14.183833   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.184164   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.184194   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.184365   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.184862   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.185029   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.185105   59655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:56:14.185152   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.185222   59655 ssh_runner.go:195] Run: cat /version.json
	I0708 20:56:14.185248   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.187788   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188002   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188135   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.188167   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.188299   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.188328   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188501   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.188505   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.188641   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.188715   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.188803   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.188885   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.189022   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.298253   59655 ssh_runner.go:195] Run: systemctl --version
	I0708 20:56:14.305004   59655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:56:14.457540   59655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:56:14.464497   59655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:56:14.464567   59655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:56:14.482063   59655 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:56:14.482093   59655 start.go:494] detecting cgroup driver to use...
	I0708 20:56:14.482172   59655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:56:14.500206   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:56:14.515905   59655 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:56:14.515952   59655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:56:14.532277   59655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:56:14.552772   59655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:56:14.686229   59655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:56:14.845428   59655 docker.go:233] disabling docker service ...
	I0708 20:56:14.845496   59655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:56:14.863157   59655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:56:14.881174   59655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:56:15.029269   59655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:56:15.165105   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:56:15.181619   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:56:15.202743   59655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:56:15.202806   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.215848   59655 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:56:15.215925   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.228697   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.240964   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.257002   59655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:56:15.270309   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.283215   59655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.303235   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.322364   59655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:56:15.340757   59655 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:56:15.340836   59655 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:56:15.360592   59655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:56:15.372486   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:15.510210   59655 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:56:15.656090   59655 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:56:15.656169   59655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:56:15.661847   59655 start.go:562] Will wait 60s for crictl version
	I0708 20:56:15.661917   59655 ssh_runner.go:195] Run: which crictl
	I0708 20:56:15.666004   59655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:56:15.707842   59655 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:56:15.707928   59655 ssh_runner.go:195] Run: crio --version
	I0708 20:56:15.740434   59655 ssh_runner.go:195] Run: crio --version
	I0708 20:56:15.772450   59655 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
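
The crictl probe above is what reports RuntimeName cri-o / RuntimeVersion 1.29.1. The same information can be queried by hand against the socket written to /etc/crictl.yaml earlier (endpoint path taken from the log; the manual invocation is illustrative):

    # Query the CRI runtime directly over the configured socket
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
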
	I0708 20:56:14.204596   58678 main.go:141] libmachine: (no-preload-028021) Calling .Start
	I0708 20:56:14.204780   58678 main.go:141] libmachine: (no-preload-028021) Ensuring networks are active...
	I0708 20:56:14.205463   58678 main.go:141] libmachine: (no-preload-028021) Ensuring network default is active
	I0708 20:56:14.205799   58678 main.go:141] libmachine: (no-preload-028021) Ensuring network mk-no-preload-028021 is active
	I0708 20:56:14.206280   58678 main.go:141] libmachine: (no-preload-028021) Getting domain xml...
	I0708 20:56:14.207187   58678 main.go:141] libmachine: (no-preload-028021) Creating domain...
	I0708 20:56:15.514100   58678 main.go:141] libmachine: (no-preload-028021) Waiting to get IP...
	I0708 20:56:15.514946   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:15.515419   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:15.515473   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:15.515397   60369 retry.go:31] will retry after 282.59763ms: waiting for machine to come up
	I0708 20:56:15.799976   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:15.800525   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:15.800555   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:15.800482   60369 retry.go:31] will retry after 377.094067ms: waiting for machine to come up
	I0708 20:56:16.179257   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:16.179953   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:16.179979   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:16.179861   60369 retry.go:31] will retry after 433.953923ms: waiting for machine to come up
	I0708 20:56:15.773711   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:15.776947   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:15.777368   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:15.777400   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:15.777704   59655 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0708 20:56:15.782466   59655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:15.796924   59655 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:56:15.797072   59655 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:56:15.797138   59655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:15.841838   59655 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:56:15.841922   59655 ssh_runner.go:195] Run: which lz4
	I0708 20:56:15.846443   59655 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 20:56:15.851267   59655 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:56:15.851302   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
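
The stat failure above just means no preload tarball exists on the node yet, so the ~395 MB archive is copied over before extraction. A manual sketch of that check-then-copy step (the ssh/scp invocation, user and key handling are assumptions for illustration; the IP and paths come from the log):

    # If the tarball is missing on the node, push it from the host-side cache
    ssh root@192.168.72.163 'stat /preloaded.tar.lz4' || \
      scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 \
        root@192.168.72.163:/preloaded.tar.lz4
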
	I0708 20:56:15.397039   59107 pod_ready.go:92] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:15.397070   59107 pod_ready.go:81] duration metric: took 6.008141421s for pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:15.397082   59107 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.405606   59107 pod_ready.go:92] pod "etcd-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:17.405638   59107 pod_ready.go:81] duration metric: took 2.008547358s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.405653   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.411786   59107 pod_ready.go:92] pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:17.411810   59107 pod_ready.go:81] duration metric: took 6.14625ms for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.411822   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.421681   59107 pod_ready.go:92] pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.421712   59107 pod_ready.go:81] duration metric: took 2.009879259s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.421725   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5h5xl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.428235   59107 pod_ready.go:92] pod "kube-proxy-5h5xl" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.428260   59107 pod_ready.go:81] duration metric: took 6.527896ms for pod "kube-proxy-5h5xl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.428269   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.433130   59107 pod_ready.go:92] pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.433154   59107 pod_ready.go:81] duration metric: took 4.87807ms for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.433163   59107 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" ...
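
Each pod_ready step above polls the pod's Ready condition until it flips to True or the 4m0s budget runs out. Expressed with kubectl, the metrics-server wait that starts here is roughly the following (the context name is assumed to match the profile and the label selector is an assumption, not taken from the log):

    # Approximate equivalent of the metrics-server readiness wait
    kubectl --context embed-certs-239931 -n kube-system wait pod \
      -l k8s-app=metrics-server --for=condition=Ready --timeout=4m
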
	I0708 20:56:16.615670   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:16.616225   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:16.616257   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:16.616177   60369 retry.go:31] will retry after 489.658115ms: waiting for machine to come up
	I0708 20:56:17.107848   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:17.108391   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:17.108420   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:17.108341   60369 retry.go:31] will retry after 620.239043ms: waiting for machine to come up
	I0708 20:56:17.730239   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:17.730822   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:17.730854   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:17.730758   60369 retry.go:31] will retry after 818.379867ms: waiting for machine to come up
	I0708 20:56:18.550539   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:18.551024   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:18.551049   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:18.550993   60369 retry.go:31] will retry after 1.138596155s: waiting for machine to come up
	I0708 20:56:19.691669   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:19.692214   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:19.692267   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:19.692149   60369 retry.go:31] will retry after 1.467771065s: waiting for machine to come up
	I0708 20:56:21.161367   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:21.161916   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:21.161945   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:21.161854   60369 retry.go:31] will retry after 1.592022559s: waiting for machine to come up
	I0708 20:56:17.447251   59655 crio.go:462] duration metric: took 1.600850063s to copy over tarball
	I0708 20:56:17.447341   59655 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:56:19.773249   59655 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.325874804s)
	I0708 20:56:19.773277   59655 crio.go:469] duration metric: took 2.325993304s to extract the tarball
	I0708 20:56:19.773286   59655 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:56:19.811911   59655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:19.859029   59655 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:56:19.859060   59655 cache_images.go:84] Images are preloaded, skipping loading
	I0708 20:56:19.859070   59655 kubeadm.go:928] updating node { 192.168.72.163 8444 v1.30.2 crio true true} ...
	I0708 20:56:19.859208   59655 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-071971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:19.859281   59655 ssh_runner.go:195] Run: crio config
	I0708 20:56:19.905778   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:56:19.905806   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:19.905822   59655 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:19.905847   59655 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.163 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-071971 NodeName:default-k8s-diff-port-071971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:19.906035   59655 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.163
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-071971"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:19.906113   59655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:19.916307   59655 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:19.916388   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:19.926496   59655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0708 20:56:19.947778   59655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:19.969466   59655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
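
The kubeadm.yaml.new just copied is the configuration rendered above. If you want to sanity-check such a file by hand, recent kubeadm releases ship a validator (availability of the subcommand on this binary is an assumption, not something the log shows):

    # Validate the generated file against the kubeadm API types before it is used
    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
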
	I0708 20:56:19.991103   59655 ssh_runner.go:195] Run: grep 192.168.72.163	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:19.995180   59655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:20.008005   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:20.143869   59655 ssh_runner.go:195] Run: sudo systemctl start kubelet
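
After the daemon-reload and kubelet start above, the drop-in written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf should be the one systemd actually loads. A quick, illustrative check on the node:

    # Confirm systemd picked up the unit plus drop-in and that kubelet is running
    systemctl cat kubelet | head -n 20
    systemctl is-active kubelet
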
	I0708 20:56:20.162694   59655 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971 for IP: 192.168.72.163
	I0708 20:56:20.162713   59655 certs.go:194] generating shared ca certs ...
	I0708 20:56:20.162745   59655 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:20.162930   59655 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:20.162986   59655 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:20.162997   59655 certs.go:256] generating profile certs ...
	I0708 20:56:20.163097   59655 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.key
	I0708 20:56:20.163220   59655 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.key.17bd30e8
	I0708 20:56:20.163259   59655 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.key
	I0708 20:56:20.163394   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:20.163478   59655 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:20.163493   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:20.163524   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:20.163558   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:20.163594   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:20.163659   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:20.164318   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:20.198987   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:20.251872   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:20.281444   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:20.305751   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0708 20:56:20.332608   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 20:56:20.365206   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:20.399631   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:56:20.430016   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:20.462126   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:20.492669   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:20.521867   59655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:20.540725   59655 ssh_runner.go:195] Run: openssl version
	I0708 20:56:20.546789   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:20.558515   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.563342   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.563430   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.570039   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:20.585367   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:20.601217   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.605930   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.605993   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.612015   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:56:20.623796   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:20.635305   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.640571   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.640649   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.648600   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
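
The ls / openssl x509 -hash / ln sequence repeated above is the standard OpenSSL CA-path layout: each trusted certificate gets a symlink named after its subject hash. For the cluster CA it boils down to the following (the computed hash matches the b5213941.0 link in the log):

    # Create the hash-named symlink OpenSSL uses to find the CA at verify time
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
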
	I0708 20:56:20.663899   59655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:20.669383   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:20.675967   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:20.682513   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:20.690280   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:20.698720   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:20.705678   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
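
Each -checkend 86400 probe above exits non-zero if the certificate would expire within the next 24 hours, which presumably lets the restart path skip regenerating certs that are still valid. For example:

    # Exit status says whether the cert stays valid for at least 86400s (24h)
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for >= 24h" || echo "expires within 24h"
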
	I0708 20:56:20.712524   59655 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:20.712643   59655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:20.712700   59655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:20.761032   59655 cri.go:89] found id: ""
	I0708 20:56:20.761107   59655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:20.772712   59655 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:20.772736   59655 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:20.772742   59655 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:20.772793   59655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:20.784860   59655 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:20.785974   59655 kubeconfig.go:125] found "default-k8s-diff-port-071971" server: "https://192.168.72.163:8444"
	I0708 20:56:20.788290   59655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:20.799889   59655 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.163
	I0708 20:56:20.799919   59655 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:20.799947   59655 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:20.800011   59655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:20.846864   59655 cri.go:89] found id: ""
	I0708 20:56:20.846936   59655 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:20.865883   59655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:20.877476   59655 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:20.877495   59655 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:20.877548   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0708 20:56:20.889786   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:20.889853   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:20.902185   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0708 20:56:20.913510   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:20.913573   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:20.923964   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0708 20:56:20.934048   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:20.934131   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:20.945078   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0708 20:56:20.955290   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:20.955354   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:56:20.966182   59655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:20.977508   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:21.319213   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:21.511204   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:23.942367   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:22.755738   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:22.756182   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:22.756243   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:22.756167   60369 retry.go:31] will retry after 1.858003233s: waiting for machine to come up
	I0708 20:56:24.616152   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:24.616674   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:24.616703   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:24.616618   60369 retry.go:31] will retry after 2.203640369s: waiting for machine to come up
	I0708 20:56:22.471504   59655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.152252924s)
	I0708 20:56:22.471539   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.692407   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.756884   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.892773   59655 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:22.892888   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.393789   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.893298   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.941073   59655 api_server.go:72] duration metric: took 1.048301169s to wait for apiserver process to appear ...
	I0708 20:56:23.941100   59655 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:23.941131   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.221991   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:56:27.222029   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:56:27.222048   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:26.441670   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:28.939138   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:27.353017   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.353069   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:27.442130   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.447304   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.447326   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:27.941979   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.951850   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.951878   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:28.441380   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:28.452031   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:28.452069   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:28.941613   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:28.946045   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:28.946084   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:29.441485   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:29.448847   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:29.448877   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:29.941906   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:29.946380   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:29.946416   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:30.442013   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:30.447291   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 200:
	ok
	I0708 20:56:30.454664   59655 api_server.go:141] control plane version: v1.30.2
	I0708 20:56:30.454693   59655 api_server.go:131] duration metric: took 6.513586414s to wait for apiserver health ...
	I0708 20:56:30.454701   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:56:30.454707   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:30.456577   59655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
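The repeated 500 responses above are minikube (api_server.go) polling the apiserver's /healthz endpoint roughly every 500ms until it returns 200 "ok"; the 500 body lists which poststarthook is still failing. Below is a minimal, self-contained sketch of that kind of poll loop. It is illustrative only, not minikube's implementation: the URL, interval, and timeout are taken from or assumed against the log, and TLS verification is skipped purely to keep the sketch short (a real client would present the cluster CA and client certificates).

// healthzpoll.go - illustrative sketch of polling an apiserver /healthz
// endpoint until it reports healthy, mirroring the retry pattern in the
// log above. Not minikube's code; URL, interval and timeout are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Verification is skipped only for brevity in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok" - apiserver is healthy
			}
			// A 500 body lists each failed poststarthook, as in the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.163:8444/healthz", 500*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err)
	}
}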
	I0708 20:56:26.821665   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:26.822266   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:26.822297   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:26.822209   60369 retry.go:31] will retry after 3.478824168s: waiting for machine to come up
	I0708 20:56:30.302329   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:30.302766   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:30.302796   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:30.302707   60369 retry.go:31] will retry after 3.597512692s: waiting for machine to come up
	I0708 20:56:30.458168   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:56:30.469918   59655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 20:56:30.492348   59655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:56:30.503174   59655 system_pods.go:59] 8 kube-system pods found
	I0708 20:56:30.503210   59655 system_pods.go:61] "coredns-7db6d8ff4d-c4tzw" [e5ea7dde-1134-45d0-b3e2-176e6a8f068e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:56:30.503218   59655 system_pods.go:61] "etcd-default-k8s-diff-port-071971" [693fd668-83c2-43e6-bf43-7b1a9e654db0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:56:30.503226   59655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071971" [eadde33a-b967-4a58-9730-d163e6b8c0c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:56:30.503233   59655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071971" [99bd8e55-e0a9-4071-a0f0-dc9d1e79b58d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:56:30.503238   59655 system_pods.go:61] "kube-proxy-vq4l8" [e2a4779c-e8ed-4f5b-872b-d10604936178] Running
	I0708 20:56:30.503244   59655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071971" [af6b0a79-be1e-4caa-86a6-47ac782ac438] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:56:30.503250   59655 system_pods.go:61] "metrics-server-569cc877fc-h2dzd" [7075aa8e-0716-4965-8a13-3ed804190b3e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:56:30.503257   59655 system_pods.go:61] "storage-provisioner" [9fca5ac9-cd65-4257-b012-20ded80a39a5] Running
	I0708 20:56:30.503265   59655 system_pods.go:74] duration metric: took 10.887672ms to wait for pod list to return data ...
	I0708 20:56:30.503279   59655 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:56:30.509137   59655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:56:30.509170   59655 node_conditions.go:123] node cpu capacity is 2
	I0708 20:56:30.509189   59655 node_conditions.go:105] duration metric: took 5.903588ms to run NodePressure ...
	I0708 20:56:30.509210   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:30.780430   59655 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:56:30.788138   59655 kubeadm.go:733] kubelet initialised
	I0708 20:56:30.788168   59655 kubeadm.go:734] duration metric: took 7.711132ms waiting for restarted kubelet to initialise ...
	I0708 20:56:30.788177   59655 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:56:30.797001   59655 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace to be "Ready" ...
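The pod_ready lines in this log wait for each system-critical pod's Ready condition to become True. A rough client-go sketch of that check follows; the helper names and kubeconfig handling are invented for illustration and this is not minikube's pod_ready.go.

// podready.go - illustrative sketch of waiting for a pod's Ready condition
// with client-go, mirroring the pod_ready.go waits in the log. Helper names
// are invented for this example.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitForPodReady(cs, "kube-system", "coredns-7db6d8ff4d-c4tzw", 4*time.Minute))
}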
	I0708 20:56:30.939824   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:32.940860   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:34.941652   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:33.901849   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.902332   58678 main.go:141] libmachine: (no-preload-028021) Found IP for machine: 192.168.39.108
	I0708 20:56:33.902356   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has current primary IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.902361   58678 main.go:141] libmachine: (no-preload-028021) Reserving static IP address...
	I0708 20:56:33.902766   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "no-preload-028021", mac: "52:54:00:c5:5d:f8", ip: "192.168.39.108"} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:33.902797   58678 main.go:141] libmachine: (no-preload-028021) DBG | skip adding static IP to network mk-no-preload-028021 - found existing host DHCP lease matching {name: "no-preload-028021", mac: "52:54:00:c5:5d:f8", ip: "192.168.39.108"}
	I0708 20:56:33.902808   58678 main.go:141] libmachine: (no-preload-028021) Reserved static IP address: 192.168.39.108
	I0708 20:56:33.902825   58678 main.go:141] libmachine: (no-preload-028021) Waiting for SSH to be available...
	I0708 20:56:33.902835   58678 main.go:141] libmachine: (no-preload-028021) DBG | Getting to WaitForSSH function...
	I0708 20:56:33.905031   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.905318   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:33.905341   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.905479   58678 main.go:141] libmachine: (no-preload-028021) DBG | Using SSH client type: external
	I0708 20:56:33.905509   58678 main.go:141] libmachine: (no-preload-028021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa (-rw-------)
	I0708 20:56:33.905535   58678 main.go:141] libmachine: (no-preload-028021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:56:33.905560   58678 main.go:141] libmachine: (no-preload-028021) DBG | About to run SSH command:
	I0708 20:56:33.905573   58678 main.go:141] libmachine: (no-preload-028021) DBG | exit 0
	I0708 20:56:34.035510   58678 main.go:141] libmachine: (no-preload-028021) DBG | SSH cmd err, output: <nil>: 
	I0708 20:56:34.035876   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetConfigRaw
	I0708 20:56:34.036501   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:34.039070   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.039467   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.039496   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.039711   58678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/config.json ...
	I0708 20:56:34.039936   58678 machine.go:94] provisionDockerMachine start ...
	I0708 20:56:34.039955   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:34.040191   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.042269   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.042640   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.042666   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.042793   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.042954   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.043125   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.043292   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.043496   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.043662   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.043671   58678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:56:34.156092   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:56:34.156143   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.156412   58678 buildroot.go:166] provisioning hostname "no-preload-028021"
	I0708 20:56:34.156441   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.156639   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.159015   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.159420   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.159467   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.159606   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.159817   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.160015   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.160214   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.160407   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.160572   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.160584   58678 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-028021 && echo "no-preload-028021" | sudo tee /etc/hostname
	I0708 20:56:34.286222   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-028021
	
	I0708 20:56:34.286250   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.289067   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.289376   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.289399   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.289617   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.289832   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.289991   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.290129   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.290310   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.290471   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.290485   58678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-028021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-028021/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-028021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:56:34.414724   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:56:34.414749   58678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:56:34.414790   58678 buildroot.go:174] setting up certificates
	I0708 20:56:34.414799   58678 provision.go:84] configureAuth start
	I0708 20:56:34.414808   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.415115   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:34.417919   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.418241   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.418273   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.418491   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.421129   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.421603   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.421634   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.421756   58678 provision.go:143] copyHostCerts
	I0708 20:56:34.421818   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:56:34.421839   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:56:34.421906   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:56:34.422023   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:56:34.422034   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:56:34.422064   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:56:34.422151   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:56:34.422161   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:56:34.422196   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:56:34.422276   58678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.no-preload-028021 san=[127.0.0.1 192.168.39.108 localhost minikube no-preload-028021]
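The provision.go:117 step above issues a server certificate signed by the minikube CA with the listed DNS and IP SANs. The crypto/x509 sketch below shows how such a certificate can be issued in general; it is a generic illustration under assumed inputs (the CA material is presumed already loaded), not minikube's certificate code.

// servercert.go - illustrative sketch of issuing a server certificate with
// DNS and IP SANs from an existing CA, as in the provision step above.
// The caller is assumed to have loaded caCert and caKey already; this is
// not minikube's implementation.
package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-028021"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-028021"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.108")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}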
	I0708 20:56:34.634189   58678 provision.go:177] copyRemoteCerts
	I0708 20:56:34.634253   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:56:34.634281   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.637123   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.637364   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.637396   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.637609   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.637912   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.638172   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.638410   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:34.726761   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:56:34.751947   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0708 20:56:34.776165   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:56:34.803849   58678 provision.go:87] duration metric: took 389.036659ms to configureAuth
	I0708 20:56:34.803880   58678 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:56:34.804125   58678 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:56:34.804202   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.808559   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.808925   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.808966   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.809164   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.809416   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.809572   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.809710   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.809874   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.810069   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.810097   58678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:56:35.096796   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:56:35.096822   58678 machine.go:97] duration metric: took 1.056870853s to provisionDockerMachine
	I0708 20:56:35.096834   58678 start.go:293] postStartSetup for "no-preload-028021" (driver="kvm2")
	I0708 20:56:35.096847   58678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:56:35.096864   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.097227   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:56:35.097266   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.100040   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.100428   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.100449   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.100637   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.100826   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.100967   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.101128   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.187796   58678 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:56:35.192221   58678 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:56:35.192248   58678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:56:35.192315   58678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:56:35.192383   58678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:56:35.192467   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:56:35.204227   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:35.230404   58678 start.go:296] duration metric: took 133.555408ms for postStartSetup
	I0708 20:56:35.230446   58678 fix.go:56] duration metric: took 21.04954132s for fixHost
	I0708 20:56:35.230464   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.233341   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.233654   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.233685   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.233878   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.234070   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.234248   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.234413   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.234611   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:35.234834   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:35.234849   58678 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:56:35.348439   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472195.300246165
	
	I0708 20:56:35.348459   58678 fix.go:216] guest clock: 1720472195.300246165
	I0708 20:56:35.348468   58678 fix.go:229] Guest: 2024-07-08 20:56:35.300246165 +0000 UTC Remote: 2024-07-08 20:56:35.230449891 +0000 UTC m=+338.995803708 (delta=69.796274ms)
	I0708 20:56:35.348487   58678 fix.go:200] guest clock delta is within tolerance: 69.796274ms
	I0708 20:56:35.348492   58678 start.go:83] releasing machines lock for "no-preload-028021", held for 21.167624903s
	I0708 20:56:35.348509   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.348752   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:35.351300   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.351779   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.351806   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.351977   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352557   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352725   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352799   58678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:56:35.352839   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.352942   58678 ssh_runner.go:195] Run: cat /version.json
	I0708 20:56:35.352969   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.355646   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356037   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.356071   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356117   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356267   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.356470   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.356555   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.356580   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356642   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.356706   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.356770   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.356885   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.357020   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.357154   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.438344   58678 ssh_runner.go:195] Run: systemctl --version
	I0708 20:56:35.470518   58678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:56:35.628022   58678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:56:35.636390   58678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:56:35.636468   58678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:56:35.654729   58678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:56:35.654753   58678 start.go:494] detecting cgroup driver to use...
	I0708 20:56:35.654824   58678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:56:35.678564   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:56:35.697122   58678 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:56:35.697202   58678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:56:35.713388   58678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:56:35.728254   58678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:56:35.874433   58678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:56:36.062521   58678 docker.go:233] disabling docker service ...
	I0708 20:56:36.062614   58678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:56:36.081225   58678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:56:36.096855   58678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:56:36.229455   58678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:56:36.375525   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:56:36.390772   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:56:36.411762   58678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:56:36.411905   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.423149   58678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:56:36.423218   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.434145   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.447568   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.458758   58678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:56:36.469393   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.479663   58678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.501298   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.512407   58678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:56:36.522400   58678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:56:36.522469   58678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:56:36.536310   58678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:56:36.547955   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:36.680408   58678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:56:36.860344   58678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:56:36.860416   58678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:56:36.866153   58678 start.go:562] Will wait 60s for crictl version
	I0708 20:56:36.866221   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:36.871623   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:56:36.917564   58678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:56:36.917655   58678 ssh_runner.go:195] Run: crio --version
	I0708 20:56:36.954595   58678 ssh_runner.go:195] Run: crio --version
	I0708 20:56:36.985788   58678 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
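After "systemctl restart crio" the log waits up to 60s for the socket path /var/run/crio/crio.sock to exist (via stat) before querying crictl. A small sketch of that wait is shown below; path and interval are taken from or assumed against the log, and this is not minikube's start.go code.

// crisocketwait.go - illustrative sketch of waiting for the CRI-O socket to
// appear after a runtime restart, as in the 60s socket wait in the log.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists; crictl version can be queried next
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}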
	I0708 20:56:32.805051   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:35.303979   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:36.303556   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.303581   59655 pod_ready.go:81] duration metric: took 5.506548207s for pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.303590   59655 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.308571   59655 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.308596   59655 pod_ready.go:81] duration metric: took 4.998994ms for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.308610   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.314379   59655 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.314402   59655 pod_ready.go:81] duration metric: took 5.784289ms for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.314411   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.942775   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:39.440483   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:36.987568   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:36.990699   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:36.991105   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:36.991146   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:36.991378   58678 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 20:56:36.996102   58678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:37.012228   58678 kubeadm.go:877] updating cluster {Name:no-preload-028021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:56:37.012390   58678 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:56:37.012439   58678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:37.050690   58678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:56:37.050715   58678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 20:56:37.050765   58678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.050988   58678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.051005   58678 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.051146   58678 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.051199   58678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.051323   58678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.051396   58678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.051560   58678 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0708 20:56:37.052741   58678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.052826   58678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.052840   58678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.052853   58678 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0708 20:56:37.052910   58678 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.052742   58678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.052741   58678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.052744   58678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.237714   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.238720   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.246932   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0708 20:56:37.253938   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.256152   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.264291   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.304685   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.316620   58678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.2" does not exist at hash "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940" in container runtime
	I0708 20:56:37.316664   58678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.316710   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.352464   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.387003   58678 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0708 20:56:37.387039   58678 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.387078   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.513840   58678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.2" does not exist at hash "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974" in container runtime
	I0708 20:56:37.513886   58678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.513925   58678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.2" does not exist at hash "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe" in container runtime
	I0708 20:56:37.513938   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.513958   58678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.513987   58678 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0708 20:56:37.514000   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514016   58678 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.514054   58678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.2" does not exist at hash "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772" in container runtime
	I0708 20:56:37.514097   58678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.514061   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514136   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514138   58678 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0708 20:56:37.514078   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.514159   58678 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.514191   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514224   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.535635   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.535678   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.535744   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.535744   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.596995   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0708 20:56:37.597092   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.597102   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.651190   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0708 20:56:37.651320   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:37.695843   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0708 20:56:37.695944   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0708 20:56:37.695995   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.2 (exists)
	I0708 20:56:37.696018   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:37.696020   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.696052   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:37.695849   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0708 20:56:37.696071   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.695875   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0708 20:56:37.696117   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:37.696211   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:37.721410   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0708 20:56:37.721453   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.2 (exists)
	I0708 20:56:37.721536   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0708 20:56:37.721644   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:39.890974   58678 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.19489331s)
	I0708 20:56:39.891017   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.2 (exists)
	I0708 20:56:39.891070   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2: (2.194976871s)
	I0708 20:56:39.891096   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 from cache
	I0708 20:56:39.891107   58678 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.194875907s)
	I0708 20:56:39.891117   58678 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:39.891120   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0708 20:56:39.891156   58678 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.2: (2.194966409s)
	I0708 20:56:39.891175   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:39.891184   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.2 (exists)
	I0708 20:56:39.891196   58678 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.169535432s)
	I0708 20:56:39.891212   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0708 20:56:37.824606   59655 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:37.824634   59655 pod_ready.go:81] duration metric: took 1.510214968s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.824646   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vq4l8" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.829963   59655 pod_ready.go:92] pod "kube-proxy-vq4l8" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:37.829988   59655 pod_ready.go:81] duration metric: took 5.334688ms for pod "kube-proxy-vq4l8" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.829997   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:38.338575   59655 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:38.338611   59655 pod_ready.go:81] duration metric: took 508.60515ms for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:38.338625   59655 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:40.346498   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:41.939773   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:43.941838   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:41.962256   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.071056184s)
	I0708 20:56:41.962281   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0708 20:56:41.962304   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:41.962349   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:44.325730   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2: (2.363358371s)
	I0708 20:56:44.325760   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 from cache
	I0708 20:56:44.325789   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:44.325853   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:42.845177   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:44.846215   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:46.441086   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:48.939348   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:46.588882   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.263001s)
	I0708 20:56:46.588909   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 from cache
	I0708 20:56:46.588931   58678 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:46.588980   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:50.590689   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.001689035s)
	I0708 20:56:50.590724   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0708 20:56:50.590758   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:50.590813   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:47.345179   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:49.346736   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:51.846003   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:50.940095   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:53.441346   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:52.446198   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2: (1.855362154s)
	I0708 20:56:52.446229   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 from cache
	I0708 20:56:52.446247   58678 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:52.446284   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:53.400379   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0708 20:56:53.400419   58678 cache_images.go:123] Successfully loaded all cached images
	I0708 20:56:53.400424   58678 cache_images.go:92] duration metric: took 16.349697925s to LoadCachedImages
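At this point every control-plane image has come out of minikube's on-disk cache rather than the network: `podman image inspect` decides whether the image is already in CRI-O's store, `crictl rmi` clears any stale tag, and `podman load -i` imports the cached tarball. The Go sketch below reproduces only the inspect-then-load part; it is illustrative (the image-to-tarball mapping is a hypothetical subset, not read from this run) and has to run as root on the VM.

// loadcache.go: minimal sketch of the "inspect, then podman load" flow seen above (illustrative).
package main

import (
	"fmt"
	"os/exec"
)

// imagePresent mirrors: sudo podman image inspect --format {{.Id}} <ref>
func imagePresent(ref string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
}

// loadFromCache mirrors: sudo podman load -i /var/lib/minikube/images/<name>
func loadFromCache(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	// Hypothetical subset of the image list in the log above.
	images := map[string]string{
		"registry.k8s.io/kube-scheduler:v1.30.2": "/var/lib/minikube/images/kube-scheduler_v1.30.2",
		"registry.k8s.io/pause:3.9":              "/var/lib/minikube/images/pause_3.9",
	}
	for ref, tar := range images {
		if imagePresent(ref) {
			fmt.Println("already present:", ref)
			continue
		}
		if err := loadFromCache(tar); err != nil {
			fmt.Println("load failed:", err)
			continue
		}
		fmt.Println("loaded:", ref)
	}
}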
	I0708 20:56:53.400436   58678 kubeadm.go:928] updating node { 192.168.39.108 8443 v1.30.2 crio true true} ...
	I0708 20:56:53.400599   58678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-028021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:53.400692   58678 ssh_runner.go:195] Run: crio config
	I0708 20:56:53.452091   58678 cni.go:84] Creating CNI manager for ""
	I0708 20:56:53.452117   58678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:53.452131   58678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:53.452150   58678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.108 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-028021 NodeName:no-preload-028021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:53.452285   58678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-028021"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:53.452344   58678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:53.464447   58678 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:53.464522   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:53.474930   58678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0708 20:56:53.493701   58678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:53.511491   58678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
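The rendered kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new (2161 bytes here) before it is later diffed against, and copied over, the live kubeadm.yaml. A rough stand-alone sanity check of such a file, using only the standard library, is sketched below; the list of fields is an assumption about what matters for this profile, not minikube's own validation logic.

// checkkubeadm.go: rough sanity check of the rendered kubeadm config (illustrative).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	path := "/var/tmp/minikube/kubeadm.yaml.new" // path used in the log above
	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Fields this run's config is expected to carry (assumed subset).
	want := map[string]bool{
		"kubernetesVersion:":    false,
		"podSubnet:":            false,
		"controlPlaneEndpoint:": false,
		"criSocket:":            false,
	}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		for k := range want {
			if strings.HasPrefix(line, k) {
				want[k] = true
			}
		}
	}
	for k, ok := range want {
		fmt.Printf("%-22s present=%v\n", k, ok)
	}
}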
	I0708 20:56:53.530848   58678 ssh_runner.go:195] Run: grep 192.168.39.108	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:53.534931   58678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:53.547590   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:53.658960   58678 ssh_runner.go:195] Run: sudo systemctl start kubelet
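Before the kubelet is (re)started, the bash one-liner at 20:56:53.534931 rewrites /etc/hosts so that control-plane.minikube.internal resolves to 192.168.39.108. A simplified Go equivalent is sketched below; it only appends a missing entry and does not strip stale ones, so it is an illustration of the idea rather than a drop-in replacement, and it needs root.

// pinhosts.go: ensure the control-plane /etc/hosts entry exists (illustrative, simplified).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const ip, host = "192.168.39.108", "control-plane.minikube.internal" // values from this run
	f, err := os.Open("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	present := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == ip && fields[1] == host {
			present = true
		}
	}
	f.Close()
	if present {
		fmt.Println("entry already present")
		return
	}
	// Append the missing mapping (requires root; same net effect as the bash one-liner above).
	out, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer out.Close()
	fmt.Fprintf(out, "%s\t%s\n", ip, host)
}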
	I0708 20:56:53.677127   58678 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021 for IP: 192.168.39.108
	I0708 20:56:53.677151   58678 certs.go:194] generating shared ca certs ...
	I0708 20:56:53.677169   58678 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:53.677296   58678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:53.677330   58678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:53.677338   58678 certs.go:256] generating profile certs ...
	I0708 20:56:53.677420   58678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.key
	I0708 20:56:53.677471   58678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.key.c3084b2b
	I0708 20:56:53.677511   58678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.key
	I0708 20:56:53.677613   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:53.677639   58678 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:53.677645   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:53.677677   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:53.677752   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:53.677785   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:53.677825   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:53.680483   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:53.739386   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:53.770850   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:53.813958   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:53.850256   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0708 20:56:53.891539   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:56:53.921136   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:53.948966   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:56:53.977129   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:54.002324   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:54.028222   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:54.054099   58678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:54.073386   58678 ssh_runner.go:195] Run: openssl version
	I0708 20:56:54.079883   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:54.092980   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.097451   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.097503   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.103507   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:54.115123   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:54.126757   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.131534   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.131579   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.137333   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:54.148368   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:54.159628   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.164230   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.164280   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.170068   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:56:54.182152   58678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:54.187146   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:54.193425   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:54.200491   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:54.207006   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:54.213285   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:54.220313   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
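Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours. The same check in Go with crypto/x509 is sketched below; the paths are taken from the commands above and the 24h window mirrors -checkend 86400, everything else is illustrative.

// certcheck.go: Go equivalent of `openssl x509 -checkend 86400` as used above (illustrative).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}

In this run all of the checks at 20:56:54 complete without any certificate being regenerated, so the existing certs are reused.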
	I0708 20:56:54.227497   58678 kubeadm.go:391] StartCluster: {Name:no-preload-028021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:54.227597   58678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:54.227657   58678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:54.273025   58678 cri.go:89] found id: ""
	I0708 20:56:54.273094   58678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:54.284942   58678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:54.284965   58678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:54.284972   58678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:54.285023   58678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:54.296666   58678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:54.297740   58678 kubeconfig.go:125] found "no-preload-028021" server: "https://192.168.39.108:8443"
	I0708 20:56:54.299928   58678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:54.310186   58678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.108
	I0708 20:56:54.310224   58678 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:54.310235   58678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:54.310290   58678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:54.351640   58678 cri.go:89] found id: ""
	I0708 20:56:54.351709   58678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:54.370292   58678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:54.380551   58678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:54.380571   58678 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:54.380611   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:56:54.391462   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:54.391525   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:54.401804   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:56:54.411423   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:54.411501   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:54.422126   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:56:54.432236   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:54.432299   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:54.443001   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:56:54.454210   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:54.454271   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
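Each of the four `grep … || rm -f` pairs above answers the same question: does the existing kubeconfig still point at https://control-plane.minikube.internal:8443? If the file is missing or points elsewhere it is removed so kubeadm can regenerate it. A compact, read-only Go rendering of that check is sketched below (illustrative; it only reports what it would remove).

// staleconf.go: report kubeconfigs that no longer point at the expected endpoint (illustrative).
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			fmt.Printf("%s: missing (%v), would remove/regenerate\n", f, err)
			continue
		}
		if strings.Contains(string(data), endpoint) {
			fmt.Printf("%s: still points at %s, keep\n", f, endpoint)
		} else {
			fmt.Printf("%s: stale, would remove\n", f)
		}
	}
}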
	I0708 20:56:54.465426   58678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:54.477714   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:54.593844   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.651092   58678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.057214047s)
	I0708 20:56:55.651120   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.862342   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.952093   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
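Rather than a full `kubeadm init`, the restart path replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the config copied a moment earlier. The sketch below drives the same phase sequence from Go; the kubeadm binary path and config path are the ones visible in this run, everything else is illustrative.

// phases.go: replay the kubeadm init phases used in the restart path above (illustrative).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const kubeadm = "/var/lib/minikube/binaries/v1.30.2/kubeadm" // binary location from the log
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		fmt.Println(">>>", cmd.Args)
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "phase failed:", err)
			os.Exit(1)
		}
	}
}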
	I0708 20:56:56.070164   58678 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:56.070232   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:53.846869   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:55.847242   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:55.941645   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:58.440406   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:56.570644   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:57.071067   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:57.099879   58678 api_server.go:72] duration metric: took 1.02971362s to wait for apiserver process to appear ...
	I0708 20:56:57.099907   58678 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:57.099932   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.102677   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:57:00.102805   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:57:00.102854   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.143035   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:57:00.143069   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:57:00.600694   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.605315   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:00.605349   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:01.100628   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:01.106209   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:01.106235   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:58.345619   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:00.346515   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:01.600656   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:01.605348   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:01.605381   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:02.101023   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:02.105457   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:02.105490   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:02.600058   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:02.604370   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:02.604397   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:03.100641   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:03.105655   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:03.105685   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:03.600193   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:03.604714   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I0708 20:57:03.617761   58678 api_server.go:141] control plane version: v1.30.2
	I0708 20:57:03.617795   58678 api_server.go:131] duration metric: took 6.517881236s to wait for apiserver health ...
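	The harness gates on the apiserver by polling /healthz until it stops returning 500, which is what the repeated blocks above show. A rough stand-alone equivalent of that loop, as a shell sketch (the endpoint is copied from the log; the interval, attempt count and the use of -k instead of the cluster CA are assumptions for illustration only):

    # Poll the apiserver health endpoint until it reports 200, or give up after 60 tries.
    for i in $(seq 1 60); do
      code=$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.39.108:8443/healthz)
      [ "$code" = "200" ] && echo "apiserver healthy" && break
      echo "healthz returned ${code:-no response}; retrying..."
      sleep 1
    done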
	I0708 20:57:03.617805   58678 cni.go:84] Creating CNI manager for ""
	I0708 20:57:03.617811   58678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:57:03.619739   58678 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:57:00.940450   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:03.448484   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:03.621363   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:57:03.635846   58678 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
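	The 496-byte /etc/cni/net.d/1-k8s.conflist written above is not reproduced in the log; it can be read back with minikube -p no-preload-028021 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist". For orientation only, a minimal bridge conflist has this general shape (the values below are illustrative assumptions, not the file minikube actually wrote):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }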
	I0708 20:57:03.667045   58678 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:57:03.686236   58678 system_pods.go:59] 8 kube-system pods found
	I0708 20:57:03.686308   58678 system_pods.go:61] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:57:03.686322   58678 system_pods.go:61] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:57:03.686334   58678 system_pods.go:61] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:57:03.686348   58678 system_pods.go:61] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:57:03.686354   58678 system_pods.go:61] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 20:57:03.686363   58678 system_pods.go:61] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:57:03.686371   58678 system_pods.go:61] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:57:03.686379   58678 system_pods.go:61] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 20:57:03.686390   58678 system_pods.go:74] duration metric: took 19.320099ms to wait for pod list to return data ...
	I0708 20:57:03.686402   58678 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:57:03.696401   58678 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:57:03.696436   58678 node_conditions.go:123] node cpu capacity is 2
	I0708 20:57:03.696449   58678 node_conditions.go:105] duration metric: took 10.038061ms to run NodePressure ...
	I0708 20:57:03.696474   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:57:03.981698   58678 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:57:03.987357   58678 kubeadm.go:733] kubelet initialised
	I0708 20:57:03.987379   58678 kubeadm.go:734] duration metric: took 5.653044ms waiting for restarted kubelet to initialise ...
	I0708 20:57:03.987387   58678 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:03.993341   58678 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:03.999133   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:03.999165   58678 pod_ready.go:81] duration metric: took 5.798521ms for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:03.999177   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:03.999188   58678 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.004640   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "etcd-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.004666   58678 pod_ready.go:81] duration metric: took 5.471219ms for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.004676   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "etcd-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.004685   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.011313   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-apiserver-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.011342   58678 pod_ready.go:81] duration metric: took 6.65044ms for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.011354   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-apiserver-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.011364   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.071038   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.071092   58678 pod_ready.go:81] duration metric: took 59.716762ms for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.071105   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.071114   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.470702   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-proxy-6p6l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.470732   58678 pod_ready.go:81] duration metric: took 399.6044ms for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.470743   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-proxy-6p6l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.470753   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.871002   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-scheduler-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.871036   58678 pod_ready.go:81] duration metric: took 400.275337ms for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.871045   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-scheduler-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.871052   58678 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:05.270858   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:05.270883   58678 pod_ready.go:81] duration metric: took 399.822389ms for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:05.270892   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:05.270899   58678 pod_ready.go:38] duration metric: took 1.283504929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
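	The extra wait above can be reproduced by hand with kubectl; a rough equivalent of the per-label readiness check (the context name is assumed to match the minikube profile, and the timeout mirrors the 4m0s used in the log):

    # Check the same system-critical pods the harness waits on, one selector at a time.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context no-preload-028021 -n kube-system \
        wait --for=condition=Ready pod -l "$sel" --timeout=4m0s
    done

	Note that at this point the harness deliberately skips pods hosted on a node that is not yet Ready, so a hand-run wait like this would simply block until the node comes up.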
	I0708 20:57:05.270914   58678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 20:57:05.284879   58678 ops.go:34] apiserver oom_adj: -16
	I0708 20:57:05.284900   58678 kubeadm.go:591] duration metric: took 10.999921787s to restartPrimaryControlPlane
	I0708 20:57:05.284912   58678 kubeadm.go:393] duration metric: took 11.057424996s to StartCluster
	I0708 20:57:05.284931   58678 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:57:05.285024   58678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:57:05.287297   58678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:57:05.287607   58678 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 20:57:05.287708   58678 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 20:57:05.287790   58678 addons.go:69] Setting storage-provisioner=true in profile "no-preload-028021"
	I0708 20:57:05.287807   58678 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:57:05.287809   58678 addons.go:69] Setting default-storageclass=true in profile "no-preload-028021"
	I0708 20:57:05.287845   58678 addons.go:69] Setting metrics-server=true in profile "no-preload-028021"
	I0708 20:57:05.287900   58678 addons.go:234] Setting addon metrics-server=true in "no-preload-028021"
	W0708 20:57:05.287912   58678 addons.go:243] addon metrics-server should already be in state true
	I0708 20:57:05.287946   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.287854   58678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-028021"
	I0708 20:57:05.287825   58678 addons.go:234] Setting addon storage-provisioner=true in "no-preload-028021"
	W0708 20:57:05.288007   58678 addons.go:243] addon storage-provisioner should already be in state true
	I0708 20:57:05.288040   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.288276   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288308   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.288380   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288382   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288430   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.288413   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.289690   58678 out.go:177] * Verifying Kubernetes components...
	I0708 20:57:05.291336   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:57:05.310203   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0708 20:57:05.310610   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.311107   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.311129   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.311527   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.311990   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.312026   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.332966   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0708 20:57:05.332984   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I0708 20:57:05.333056   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I0708 20:57:05.333449   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333466   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333497   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333994   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334014   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334138   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334146   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334158   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334163   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334347   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334514   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.334640   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334683   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334822   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.335285   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.335304   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.337444   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.338763   58678 addons.go:234] Setting addon default-storageclass=true in "no-preload-028021"
	W0708 20:57:05.338785   58678 addons.go:243] addon default-storageclass should already be in state true
	I0708 20:57:05.338814   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.339217   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.339304   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.339800   58678 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 20:57:05.341280   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 20:57:05.341303   58678 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 20:57:05.341327   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.344838   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.345488   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.345504   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.345683   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.345892   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.346146   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.346326   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.359060   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0708 20:57:05.359692   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.360186   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.360207   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.360545   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.361128   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.361173   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.361352   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0708 20:57:05.361971   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.362509   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.362525   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.362911   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.363148   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.364747   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.366914   58678 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:57:05.368450   58678 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:57:05.368467   58678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 20:57:05.368483   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.372067   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.372368   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.372387   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.372767   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.373030   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.373235   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.373389   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.379255   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39973
	I0708 20:57:05.379732   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.380405   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.380428   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.380832   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.381039   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.382973   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.383191   58678 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 20:57:05.383211   58678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 20:57:05.383231   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.386273   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.386682   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.386705   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.386997   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.387184   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.387336   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.387497   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.506081   58678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:57:05.525373   58678 node_ready.go:35] waiting up to 6m0s for node "no-preload-028021" to be "Ready" ...
	I0708 20:57:05.594638   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 20:57:05.594665   58678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 20:57:05.615378   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:57:05.620306   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 20:57:05.620331   58678 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 20:57:05.639840   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 20:57:05.692078   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 20:57:05.692109   58678 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 20:57:05.756364   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 20:57:06.822244   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.206830336s)
	I0708 20:57:06.822310   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18243745s)
	I0708 20:57:06.822323   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822385   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065981271s)
	I0708 20:57:06.822418   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822432   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822390   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822351   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822504   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822850   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822870   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.822879   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822886   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822955   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.822971   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822976   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822993   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.822995   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.823009   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.823020   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.823010   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.823051   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.823154   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.823164   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.823366   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.823380   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.823390   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.825436   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.825455   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.825465   58678 addons.go:475] Verifying addon metrics-server=true in "no-preload-028021"
	I0708 20:57:06.830088   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.830108   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.830406   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.830420   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.830423   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.832322   58678 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0708 20:57:02.845629   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:05.353827   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:05.940469   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:08.439911   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:06.833974   58678 addons.go:510] duration metric: took 1.546270475s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
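	Outside the harness, the same addon set can be enabled against this profile from the minikube CLI (the profile name is taken from the log; the kubectl check at the end is just a quick way to confirm metrics-server is serving):

    # Enable the addons the test turned on for this profile.
    minikube -p no-preload-028021 addons enable storage-provisioner
    minikube -p no-preload-028021 addons enable default-storageclass
    minikube -p no-preload-028021 addons enable metrics-server
    # Once metrics-server is up, resource metrics become available:
    kubectl --context no-preload-028021 top nodes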
	I0708 20:57:07.529328   58678 node_ready.go:53] node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:09.529406   58678 node_ready.go:53] node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:11.030134   58678 node_ready.go:49] node "no-preload-028021" has status "Ready":"True"
	I0708 20:57:11.030162   58678 node_ready.go:38] duration metric: took 5.504751555s for node "no-preload-028021" to be "Ready" ...
	I0708 20:57:11.030174   58678 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:11.035309   58678 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.039750   58678 pod_ready.go:92] pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.039772   58678 pod_ready.go:81] duration metric: took 4.436756ms for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.039783   58678 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.044726   58678 pod_ready.go:92] pod "etcd-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.044748   58678 pod_ready.go:81] duration metric: took 4.958058ms for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.044756   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.049083   58678 pod_ready.go:92] pod "kube-apiserver-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.049104   58678 pod_ready.go:81] duration metric: took 4.34014ms for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.049115   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:07.846290   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:10.344964   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:10.939618   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:13.445191   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:13.056307   58678 pod_ready.go:102] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:15.056817   58678 pod_ready.go:102] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:16.063838   58678 pod_ready.go:92] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.063864   58678 pod_ready.go:81] duration metric: took 5.014740635s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.063875   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.082486   58678 pod_ready.go:92] pod "kube-proxy-6p6l6" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.082529   58678 pod_ready.go:81] duration metric: took 18.642044ms for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.082545   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.092312   58678 pod_ready.go:92] pod "kube-scheduler-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.092337   58678 pod_ready.go:81] duration metric: took 9.783638ms for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.092347   58678 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.353120   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:57:16.353203   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:57:16.355269   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:57:16.355317   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:57:16.355404   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:57:16.355558   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:57:16.355708   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:57:16.355815   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:57:16.358151   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:57:16.358312   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:57:16.358411   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:57:16.358531   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:57:16.358641   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:57:16.358732   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:57:16.358798   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:57:16.358893   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:57:16.359004   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:57:16.359128   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:57:16.359209   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:57:16.359288   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:57:16.359384   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:57:16.359509   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:57:16.359614   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:57:16.359725   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:57:16.359794   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:57:16.359881   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:57:16.359963   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:57:16.360002   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:57:16.360099   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:57:16.361960   57466 out.go:204]   - Booting up control plane ...
	I0708 20:57:16.362053   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:57:16.362196   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:57:16.362283   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:57:16.362402   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:57:16.362589   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:57:16.362819   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:57:16.362930   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363170   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363242   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363473   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363580   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363786   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363873   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364093   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364247   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364435   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364445   57466 kubeadm.go:309] 
	I0708 20:57:16.364476   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:57:16.364533   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:57:16.364541   57466 kubeadm.go:309] 
	I0708 20:57:16.364601   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:57:16.364636   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:57:16.364796   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:57:16.364820   57466 kubeadm.go:309] 
	I0708 20:57:16.364958   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:57:16.365016   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:57:16.365057   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:57:16.365063   57466 kubeadm.go:309] 
	I0708 20:57:16.365208   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:57:16.365339   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:57:16.365356   57466 kubeadm.go:309] 
	I0708 20:57:16.365490   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:57:16.365589   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:57:16.365694   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:57:16.365869   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:57:16.365969   57466 kubeadm.go:309] 
	I0708 20:57:16.365972   57466 kubeadm.go:393] duration metric: took 7m56.670441698s to StartCluster
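	The kubeadm advice printed above has to be run inside the guest; with the minikube CLI that looks roughly like this (the profile name is a placeholder, and the commands simply mirror the suggestions in the log):

    # Run the kubeadm-suggested checks inside the VM of the affected profile.
    minikube -p <profile> ssh "sudo systemctl status kubelet --no-pager"
    minikube -p <profile> ssh "sudo journalctl -xeu kubelet --no-pager -n 200"
    minikube -p <profile> ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"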
	I0708 20:57:16.366023   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:57:16.366090   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:57:16.435868   57466 cri.go:89] found id: ""
	I0708 20:57:16.435896   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.435904   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:57:16.435910   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:57:16.435969   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:57:16.478844   57466 cri.go:89] found id: ""
	I0708 20:57:16.478881   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.478896   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:57:16.478904   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:57:16.478974   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:57:16.517414   57466 cri.go:89] found id: ""
	I0708 20:57:16.517439   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.517448   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:57:16.517455   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:57:16.517516   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:57:16.557036   57466 cri.go:89] found id: ""
	I0708 20:57:16.557063   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.557074   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:57:16.557081   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:57:16.557153   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:57:16.593604   57466 cri.go:89] found id: ""
	I0708 20:57:16.593631   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.593641   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:57:16.593648   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:57:16.593704   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:57:16.634143   57466 cri.go:89] found id: ""
	I0708 20:57:16.634173   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.634183   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:57:16.634190   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:57:16.634248   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:57:16.676553   57466 cri.go:89] found id: ""
	I0708 20:57:16.676585   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.676595   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:57:16.676602   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:57:16.676663   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:57:16.715652   57466 cri.go:89] found id: ""
	I0708 20:57:16.715674   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.715682   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:57:16.715692   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:57:16.715703   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:57:16.730747   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:57:16.730776   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:57:16.814950   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:57:16.814976   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:57:16.815005   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:57:16.921144   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:57:16.921194   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:57:16.973261   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:57:16.973294   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 20:57:17.031242   57466 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0708 20:57:17.031307   57466 out.go:239] * 
	W0708 20:57:17.031362   57466 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.031389   57466 out.go:239] * 
	W0708 20:57:17.032214   57466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:57:17.035847   57466 out.go:177] 
	W0708 20:57:17.037198   57466 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.037247   57466 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0708 20:57:17.037274   57466 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0708 20:57:17.039077   57466 out.go:177] 
	I0708 20:57:12.345241   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:14.346235   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:16.347467   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.231759619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720472238231724820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c03f4cec-7fd3-42cb-a349-23186ebffb7f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.232703814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06d2121d-0ff7-450f-9909-756ae5d3b94b name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.232756349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06d2121d-0ff7-450f-9909-756ae5d3b94b name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.232788379Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=06d2121d-0ff7-450f-9909-756ae5d3b94b name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.271296210Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95dc10b9-d80c-4eaf-9b9b-f30cbfe81e08 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.271434149Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95dc10b9-d80c-4eaf-9b9b-f30cbfe81e08 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.272712696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f346e4f8-5d94-4e1a-818c-05e0b7bc67d0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.273144448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720472238273105073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f346e4f8-5d94-4e1a-818c-05e0b7bc67d0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.273790091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b6fd552-3534-4be0-8f8f-f214d5455c5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.273840356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b6fd552-3534-4be0-8f8f-f214d5455c5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.273872797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1b6fd552-3534-4be0-8f8f-f214d5455c5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.311633586Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd9b7e3d-a602-4525-a7d8-080b6e407204 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.311707129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd9b7e3d-a602-4525-a7d8-080b6e407204 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.313715498Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=116cff99-655b-49bc-801b-7b60163db6d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.314105340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720472238314071683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=116cff99-655b-49bc-801b-7b60163db6d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.314768315Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=acfc4578-ae0d-428b-924d-445d5950f41e name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.314862085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=acfc4578-ae0d-428b-924d-445d5950f41e name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.314933348Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=acfc4578-ae0d-428b-924d-445d5950f41e name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.355763220Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29b3f14f-2113-4dbe-81a4-6bf64d892649 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.355867510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29b3f14f-2113-4dbe-81a4-6bf64d892649 name=/runtime.v1.RuntimeService/Version
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.357496627Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2318085-cc21-4e96-b2bd-bba7ad9ffedc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.357938335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720472238357913530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2318085-cc21-4e96-b2bd-bba7ad9ffedc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.358668714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8440388e-9c97-4910-ab83-0b8392d4eb46 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.358740546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8440388e-9c97-4910-ab83-0b8392d4eb46 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 20:57:18 old-k8s-version-914355 crio[647]: time="2024-07-08 20:57:18.358775641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8440388e-9c97-4910-ab83-0b8392d4eb46 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul 8 20:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050631] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039837] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.623579] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul 8 20:49] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.602924] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.192762] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.057317] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062771] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.200906] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.157667] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.288740] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +6.100045] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.067577] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.762847] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[ +12.466178] kauditd_printk_skb: 46 callbacks suppressed
	[Jul 8 20:53] systemd-fstab-generator[5013]: Ignoring "noauto" option for root device
	[Jul 8 20:55] systemd-fstab-generator[5303]: Ignoring "noauto" option for root device
	[  +0.059941] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:57:18 up 8 min,  0 users,  load average: 0.05, 0.07, 0.03
	Linux old-k8s-version-914355 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000be7920, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000dd6030, 0x24, 0x0, ...)
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]: net.(*Dialer).DialContext(0xc000b3d440, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000dd6030, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b468a0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000dd6030, 0x24, 0x60, 0x7fa9ac32f808, 0x118, ...)
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]: net/http.(*Transport).dial(0xc000a63540, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000dd6030, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]: net/http.(*Transport).dialConn(0xc000a63540, 0x4f7fe00, 0xc000052030, 0x0, 0xc000b9d380, 0x5, 0xc000dd6030, 0x24, 0x0, 0xc000be5320, ...)
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]: net/http.(*Transport).dialConnFor(0xc000a63540, 0xc000dd8000)
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]: created by net/http.(*Transport).queueForDial
	Jul 08 20:57:16 old-k8s-version-914355 kubelet[5475]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 08 20:57:16 old-k8s-version-914355 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 08 20:57:16 old-k8s-version-914355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 08 20:57:17 old-k8s-version-914355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 08 20:57:17 old-k8s-version-914355 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 08 20:57:17 old-k8s-version-914355 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 08 20:57:17 old-k8s-version-914355 kubelet[5541]: I0708 20:57:17.377197    5541 server.go:416] Version: v1.20.0
	Jul 08 20:57:17 old-k8s-version-914355 kubelet[5541]: I0708 20:57:17.378125    5541 server.go:837] Client rotation is on, will bootstrap in background
	Jul 08 20:57:17 old-k8s-version-914355 kubelet[5541]: I0708 20:57:17.383806    5541 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 08 20:57:17 old-k8s-version-914355 kubelet[5541]: I0708 20:57:17.386071    5541 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 08 20:57:17 old-k8s-version-914355 kubelet[5541]: W0708 20:57:17.386248    5541 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914355 -n old-k8s-version-914355
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 2 (241.971479ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-914355" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (507.20s)
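The kubeadm output above ends with its own troubleshooting checklist, and the final minikube warning suggests retrying with an explicit kubelet cgroup driver. Below is a minimal sketch of those same steps run from the host, using the minikube ssh form already used elsewhere in this report; the profile name and the CRI-O socket path are taken from the log above, and nothing else is assumed.

# Kubelet is crash-looping above (systemd restart counter at 20, status=255).
out/minikube-linux-amd64 -p old-k8s-version-914355 ssh "sudo systemctl status kubelet"
out/minikube-linux-amd64 -p old-k8s-version-914355 ssh "sudo journalctl -xeu kubelet"

# List control-plane containers via CRI-O, as the kubeadm output recommends.
out/minikube-linux-amd64 -p old-k8s-version-914355 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

# Suggestion printed at the end of the stderr block above.
out/minikube-linux-amd64 start -p old-k8s-version-914355 --extra-config=kubelet.cgroup-driver=systemd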

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-239931 --alsologtostderr -v=3
E0708 20:49:23.843722   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-239931 --alsologtostderr -v=3: exit status 82 (2m0.562141879s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-239931"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:49:13.623534   57718 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:49:13.623812   57718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:49:13.623822   57718 out.go:304] Setting ErrFile to fd 2...
	I0708 20:49:13.623826   57718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:49:13.624011   57718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:49:13.624251   57718 out.go:298] Setting JSON to false
	I0708 20:49:13.624327   57718 mustload.go:65] Loading cluster: embed-certs-239931
	I0708 20:49:13.624660   57718 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:49:13.624726   57718 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/config.json ...
	I0708 20:49:13.624944   57718 mustload.go:65] Loading cluster: embed-certs-239931
	I0708 20:49:13.625048   57718 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:49:13.625073   57718 stop.go:39] StopHost: embed-certs-239931
	I0708 20:49:13.625422   57718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:49:13.625467   57718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:49:13.643225   57718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33053
	I0708 20:49:13.643936   57718 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:49:13.644549   57718 main.go:141] libmachine: Using API Version  1
	I0708 20:49:13.644578   57718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:49:13.645048   57718 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:49:13.647727   57718 out.go:177] * Stopping node "embed-certs-239931"  ...
	I0708 20:49:13.649062   57718 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0708 20:49:13.649092   57718 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:49:13.649399   57718 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0708 20:49:13.649438   57718 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:49:13.653108   57718 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:49:13.653610   57718 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:47:39 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:49:13.653631   57718 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:49:13.653914   57718 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:49:13.654139   57718 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:49:13.654303   57718 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:49:13.654857   57718 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:49:13.813586   57718 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0708 20:49:13.861364   57718 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0708 20:49:13.925725   57718 main.go:141] libmachine: Stopping "embed-certs-239931"...
	I0708 20:49:13.925777   57718 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 20:49:13.927866   57718 main.go:141] libmachine: (embed-certs-239931) Calling .Stop
	I0708 20:49:13.931903   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 0/120
	I0708 20:49:14.934323   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 1/120
	I0708 20:49:15.935631   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 2/120
	I0708 20:49:16.937153   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 3/120
	I0708 20:49:17.938768   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 4/120
	I0708 20:49:18.940524   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 5/120
	I0708 20:49:19.942230   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 6/120
	I0708 20:49:20.944499   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 7/120
	I0708 20:49:21.946238   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 8/120
	I0708 20:49:22.947889   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 9/120
	I0708 20:49:23.949882   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 10/120
	I0708 20:49:24.951181   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 11/120
	I0708 20:49:25.952656   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 12/120
	I0708 20:49:26.954035   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 13/120
	I0708 20:49:27.955497   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 14/120
	I0708 20:49:28.957779   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 15/120
	I0708 20:49:29.959683   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 16/120
	I0708 20:49:30.961237   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 17/120
	I0708 20:49:31.962788   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 18/120
	I0708 20:49:32.964208   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 19/120
	I0708 20:49:33.966037   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 20/120
	I0708 20:49:34.967402   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 21/120
	I0708 20:49:35.968790   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 22/120
	I0708 20:49:36.970105   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 23/120
	I0708 20:49:37.972342   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 24/120
	I0708 20:49:38.974047   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 25/120
	I0708 20:49:39.975366   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 26/120
	I0708 20:49:40.977095   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 27/120
	I0708 20:49:41.978362   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 28/120
	I0708 20:49:42.979723   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 29/120
	I0708 20:49:43.982093   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 30/120
	I0708 20:49:44.983335   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 31/120
	I0708 20:49:45.984611   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 32/120
	I0708 20:49:46.986138   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 33/120
	I0708 20:49:47.987297   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 34/120
	I0708 20:49:48.988624   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 35/120
	I0708 20:49:49.989782   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 36/120
	I0708 20:49:50.991920   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 37/120
	I0708 20:49:51.994093   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 38/120
	I0708 20:49:52.995694   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 39/120
	I0708 20:49:53.997728   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 40/120
	I0708 20:49:54.998914   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 41/120
	I0708 20:49:56.000614   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 42/120
	I0708 20:49:57.002065   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 43/120
	I0708 20:49:58.003552   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 44/120
	I0708 20:49:59.005513   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 45/120
	I0708 20:50:00.007084   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 46/120
	I0708 20:50:01.008419   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 47/120
	I0708 20:50:02.009764   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 48/120
	I0708 20:50:03.011185   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 49/120
	I0708 20:50:04.013705   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 50/120
	I0708 20:50:05.016361   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 51/120
	I0708 20:50:06.017814   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 52/120
	I0708 20:50:07.019145   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 53/120
	I0708 20:50:08.020558   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 54/120
	I0708 20:50:09.022869   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 55/120
	I0708 20:50:10.024273   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 56/120
	I0708 20:50:11.025589   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 57/120
	I0708 20:50:12.027134   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 58/120
	I0708 20:50:13.028824   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 59/120
	I0708 20:50:14.030876   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 60/120
	I0708 20:50:15.033296   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 61/120
	I0708 20:50:16.034548   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 62/120
	I0708 20:50:17.035957   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 63/120
	I0708 20:50:18.037267   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 64/120
	I0708 20:50:19.039283   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 65/120
	I0708 20:50:20.040559   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 66/120
	I0708 20:50:21.041863   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 67/120
	I0708 20:50:22.043442   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 68/120
	I0708 20:50:23.044859   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 69/120
	I0708 20:50:24.047413   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 70/120
	I0708 20:50:25.049042   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 71/120
	I0708 20:50:26.050250   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 72/120
	I0708 20:50:27.051877   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 73/120
	I0708 20:50:28.054315   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 74/120
	I0708 20:50:29.055808   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 75/120
	I0708 20:50:30.057216   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 76/120
	I0708 20:50:31.058626   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 77/120
	I0708 20:50:32.060051   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 78/120
	I0708 20:50:33.061434   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 79/120
	I0708 20:50:34.062902   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 80/120
	I0708 20:50:35.064277   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 81/120
	I0708 20:50:36.065542   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 82/120
	I0708 20:50:37.067757   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 83/120
	I0708 20:50:38.069969   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 84/120
	I0708 20:50:39.072105   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 85/120
	I0708 20:50:40.074040   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 86/120
	I0708 20:50:41.075519   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 87/120
	I0708 20:50:42.077421   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 88/120
	I0708 20:50:43.078931   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 89/120
	I0708 20:50:44.080905   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 90/120
	I0708 20:50:45.082517   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 91/120
	I0708 20:50:46.084067   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 92/120
	I0708 20:50:47.085698   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 93/120
	I0708 20:50:48.087125   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 94/120
	I0708 20:50:49.089244   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 95/120
	I0708 20:50:50.091306   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 96/120
	I0708 20:50:51.092812   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 97/120
	I0708 20:50:52.094122   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 98/120
	I0708 20:50:53.095562   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 99/120
	I0708 20:50:54.097661   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 100/120
	I0708 20:50:55.099242   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 101/120
	I0708 20:50:56.100549   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 102/120
	I0708 20:50:57.102313   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 103/120
	I0708 20:50:58.104002   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 104/120
	I0708 20:50:59.105919   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 105/120
	I0708 20:51:00.107376   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 106/120
	I0708 20:51:01.109004   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 107/120
	I0708 20:51:02.110296   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 108/120
	I0708 20:51:03.112550   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 109/120
	I0708 20:51:04.114643   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 110/120
	I0708 20:51:05.116260   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 111/120
	I0708 20:51:06.117761   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 112/120
	I0708 20:51:07.119241   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 113/120
	I0708 20:51:08.120556   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 114/120
	I0708 20:51:09.122750   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 115/120
	I0708 20:51:10.124498   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 116/120
	I0708 20:51:11.126235   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 117/120
	I0708 20:51:12.127732   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 118/120
	I0708 20:51:13.129123   57718 main.go:141] libmachine: (embed-certs-239931) Waiting for machine to stop 119/120
	I0708 20:51:14.130550   57718 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0708 20:51:14.130607   57718 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0708 20:51:14.132781   57718 out.go:177] 
	W0708 20:51:14.134162   57718 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0708 20:51:14.134181   57718 out.go:239] * 
	* 
	W0708 20:51:14.137540   57718 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:51:14.139058   57718 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-239931 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-239931 -n embed-certs-239931
E0708 20:51:29.733227   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-239931 -n embed-certs-239931: exit status 3 (18.599579062s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:51:32.739762   58855 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host
	E0708 20:51:32.739782   58855 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-239931" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.16s)
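The stop above polls the VM once per second for 120 attempts and then gives up with exit status 82 (GUEST_STOP_TIMEOUT) while libmachine still reports the state as Running. Because the profile runs on the kvm2 driver, one manual fallback is to inspect and stop the libvirt domain directly; this is only a sketch: the domain name embed-certs-239931 comes from the DBG lines above, and the availability of virsh on the Jenkins host is an assumption, not something the test itself does.

# Check whether the domain is really still running after the 120-poll timeout.
virsh domstate embed-certs-239931

# Ask for a graceful ACPI shutdown first, then fall back to a hard power-off.
virsh shutdown embed-certs-239931
sleep 30; virsh domstate embed-certs-239931
virsh destroy embed-certs-239931   # last resort: immediate power-off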

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-028021 -n no-preload-028021
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-028021 -n no-preload-028021: exit status 3 (3.1678306s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:50:47.011780   58520 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	E0708 20:50:47.011804   58520 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-028021 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-028021 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155283473s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-028021 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-028021 -n no-preload-028021
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-028021 -n no-preload-028021: exit status 3 (3.060401825s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:50:56.227773   58632 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	E0708 20:50:56.227789   58632 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-028021" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
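The "NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host" lines above come from the harness trying to open an SSH session to the profile's VM on port 22 after the failed stop, once the guest is no longer reachable. The Go sketch below is illustrative only: it is not minikube's code, and dialGuest, the IP and the key path are hypothetical. It simply shows a bare SSH dial against <ip>:22 with a private key, which surfaces the same dial error when there is no route to the guest.

	// sshdial_sketch.go - illustrative only; not minikube source.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// dialGuest opens an SSH client to ip:22 using the machine's private key.
	// When the VM is stopped or unreachable, the TCP dial fails with
	// "dial tcp <ip>:22: connect: no route to host", which is the error the
	// status and addon commands report in the log above.
	func dialGuest(ip, keyPath string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath) // e.g. .minikube/machines/<profile>/id_rsa (hypothetical path)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway sketch, not for real use
		}
		return ssh.Dial("tcp", fmt.Sprintf("%s:22", ip), cfg)
	}

	func main() {
		if _, err := dialGuest("192.168.39.108", "/path/to/id_rsa"); err != nil {
			// e.g. "dial tcp 192.168.39.108:22: connect: no route to host"
			fmt.Println("status error:", err)
		}
	}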

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-071971 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-071971 --alsologtostderr -v=3: exit status 82 (2m0.497912426s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-071971"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 20:51:05.893096   58805 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:51:05.893215   58805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:51:05.893224   58805 out.go:304] Setting ErrFile to fd 2...
	I0708 20:51:05.893228   58805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:51:05.893442   58805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:51:05.893657   58805 out.go:298] Setting JSON to false
	I0708 20:51:05.893731   58805 mustload.go:65] Loading cluster: default-k8s-diff-port-071971
	I0708 20:51:05.894057   58805 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:51:05.894124   58805 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:51:05.894294   58805 mustload.go:65] Loading cluster: default-k8s-diff-port-071971
	I0708 20:51:05.894399   58805 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:51:05.894430   58805 stop.go:39] StopHost: default-k8s-diff-port-071971
	I0708 20:51:05.894776   58805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:51:05.894822   58805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:51:05.909715   58805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I0708 20:51:05.910192   58805 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:51:05.910796   58805 main.go:141] libmachine: Using API Version  1
	I0708 20:51:05.910823   58805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:51:05.911207   58805 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:51:05.914485   58805 out.go:177] * Stopping node "default-k8s-diff-port-071971"  ...
	I0708 20:51:05.915823   58805 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0708 20:51:05.915850   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:51:05.916129   58805 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0708 20:51:05.916155   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:51:05.918976   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:51:05.919363   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:51:05.919395   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:51:05.919556   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:51:05.919735   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:51:05.919893   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:51:05.920026   58805 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:51:06.021200   58805 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0708 20:51:06.082950   58805 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0708 20:51:06.147669   58805 main.go:141] libmachine: Stopping "default-k8s-diff-port-071971"...
	I0708 20:51:06.147734   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 20:51:06.149213   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Stop
	I0708 20:51:06.152533   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 0/120
	I0708 20:51:07.154073   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 1/120
	I0708 20:51:08.155530   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 2/120
	I0708 20:51:09.156882   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 3/120
	I0708 20:51:10.158362   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 4/120
	I0708 20:51:11.160496   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 5/120
	I0708 20:51:12.161728   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 6/120
	I0708 20:51:13.163142   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 7/120
	I0708 20:51:14.164474   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 8/120
	I0708 20:51:15.165834   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 9/120
	I0708 20:51:16.167205   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 10/120
	I0708 20:51:17.168726   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 11/120
	I0708 20:51:18.169997   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 12/120
	I0708 20:51:19.171276   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 13/120
	I0708 20:51:20.172787   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 14/120
	I0708 20:51:21.174768   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 15/120
	I0708 20:51:22.176198   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 16/120
	I0708 20:51:23.177943   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 17/120
	I0708 20:51:24.179279   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 18/120
	I0708 20:51:25.180695   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 19/120
	I0708 20:51:26.181962   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 20/120
	I0708 20:51:27.183516   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 21/120
	I0708 20:51:28.185171   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 22/120
	I0708 20:51:29.186673   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 23/120
	I0708 20:51:30.188211   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 24/120
	I0708 20:51:31.190650   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 25/120
	I0708 20:51:32.192181   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 26/120
	I0708 20:51:33.193538   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 27/120
	I0708 20:51:34.194930   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 28/120
	I0708 20:51:35.196304   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 29/120
	I0708 20:51:36.198514   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 30/120
	I0708 20:51:37.199764   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 31/120
	I0708 20:51:38.201335   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 32/120
	I0708 20:51:39.202631   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 33/120
	I0708 20:51:40.204216   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 34/120
	I0708 20:51:41.206457   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 35/120
	I0708 20:51:42.207972   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 36/120
	I0708 20:51:43.209296   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 37/120
	I0708 20:51:44.210532   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 38/120
	I0708 20:51:45.211967   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 39/120
	I0708 20:51:46.213454   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 40/120
	I0708 20:51:47.215057   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 41/120
	I0708 20:51:48.216403   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 42/120
	I0708 20:51:49.218203   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 43/120
	I0708 20:51:50.219380   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 44/120
	I0708 20:51:51.221423   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 45/120
	I0708 20:51:52.222768   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 46/120
	I0708 20:51:53.224138   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 47/120
	I0708 20:51:54.225602   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 48/120
	I0708 20:51:55.226849   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 49/120
	I0708 20:51:56.229171   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 50/120
	I0708 20:51:57.230475   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 51/120
	I0708 20:51:58.231845   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 52/120
	I0708 20:51:59.233401   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 53/120
	I0708 20:52:00.235491   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 54/120
	I0708 20:52:01.237357   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 55/120
	I0708 20:52:02.238753   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 56/120
	I0708 20:52:03.239987   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 57/120
	I0708 20:52:04.241244   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 58/120
	I0708 20:52:05.242445   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 59/120
	I0708 20:52:06.244466   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 60/120
	I0708 20:52:07.246143   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 61/120
	I0708 20:52:08.247486   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 62/120
	I0708 20:52:09.248908   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 63/120
	I0708 20:52:10.250258   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 64/120
	I0708 20:52:11.252427   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 65/120
	I0708 20:52:12.253882   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 66/120
	I0708 20:52:13.255348   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 67/120
	I0708 20:52:14.256951   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 68/120
	I0708 20:52:15.258785   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 69/120
	I0708 20:52:16.261279   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 70/120
	I0708 20:52:17.262951   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 71/120
	I0708 20:52:18.264494   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 72/120
	I0708 20:52:19.266095   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 73/120
	I0708 20:52:20.267492   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 74/120
	I0708 20:52:21.269417   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 75/120
	I0708 20:52:22.270886   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 76/120
	I0708 20:52:23.272299   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 77/120
	I0708 20:52:24.273753   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 78/120
	I0708 20:52:25.275049   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 79/120
	I0708 20:52:26.277220   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 80/120
	I0708 20:52:27.278696   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 81/120
	I0708 20:52:28.280023   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 82/120
	I0708 20:52:29.282111   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 83/120
	I0708 20:52:30.283612   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 84/120
	I0708 20:52:31.285728   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 85/120
	I0708 20:52:32.286989   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 86/120
	I0708 20:52:33.288331   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 87/120
	I0708 20:52:34.289679   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 88/120
	I0708 20:52:35.291053   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 89/120
	I0708 20:52:36.293408   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 90/120
	I0708 20:52:37.294852   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 91/120
	I0708 20:52:38.296227   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 92/120
	I0708 20:52:39.297504   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 93/120
	I0708 20:52:40.299131   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 94/120
	I0708 20:52:41.301041   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 95/120
	I0708 20:52:42.302350   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 96/120
	I0708 20:52:43.303735   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 97/120
	I0708 20:52:44.305011   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 98/120
	I0708 20:52:45.306480   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 99/120
	I0708 20:52:46.308612   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 100/120
	I0708 20:52:47.310165   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 101/120
	I0708 20:52:48.311431   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 102/120
	I0708 20:52:49.312740   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 103/120
	I0708 20:52:50.314154   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 104/120
	I0708 20:52:51.316337   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 105/120
	I0708 20:52:52.317511   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 106/120
	I0708 20:52:53.318875   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 107/120
	I0708 20:52:54.320310   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 108/120
	I0708 20:52:55.321747   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 109/120
	I0708 20:52:56.323964   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 110/120
	I0708 20:52:57.325342   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 111/120
	I0708 20:52:58.326655   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 112/120
	I0708 20:52:59.328126   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 113/120
	I0708 20:53:00.329529   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 114/120
	I0708 20:53:01.330978   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 115/120
	I0708 20:53:02.332402   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 116/120
	I0708 20:53:03.333944   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 117/120
	I0708 20:53:04.335649   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 118/120
	I0708 20:53:05.337827   58805 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for machine to stop 119/120
	I0708 20:53:06.338470   58805 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0708 20:53:06.338537   58805 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0708 20:53:06.340600   58805 out.go:177] 
	W0708 20:53:06.342055   58805 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0708 20:53:06.342070   58805 out.go:239] * 
	* 
	W0708 20:53:06.345307   58805 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:53:06.346533   58805 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-071971 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971: exit status 3 (18.51875046s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:53:24.867839   59433 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.163:22: connect: no route to host
	E0708 20:53:24.867859   59433 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.163:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-071971" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.02s)
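Each of the failed Stop tests follows the same shape: the driver asks the VM to stop, polls its state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120"), and gives up with GUEST_STOP_TIMEOUT and exit code 82 while the state is still "Running". Below is a minimal sketch of that bounded stop-and-poll pattern, assuming the 1-second interval and 120 attempts seen in the log; it is not the libmachine/minikube implementation, and stopAndWait, requestStop and vmState are hypothetical names.

	// stopwait_sketch.go - illustrative sketch of the bounded stop/poll loop
	// visible in the log; not the actual libmachine/minikube implementation.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	const exitGuestStopTimeout = 82 // exit code reported by the failed `minikube stop` above

	// vmState would normally query the hypervisor; stubbed to "Running" here
	// so the sketch reproduces the timeout path from the log.
	func vmState(name string) string { return "Running" }

	// requestStop would normally send a shutdown request to the guest; stubbed here.
	func requestStop(name string) error { return nil }

	// stopAndWait mirrors the pattern in the log: issue a stop, then poll the
	// machine state once per second for up to `attempts` tries before giving up.
	func stopAndWait(name string, attempts int) error {
		if err := requestStop(name); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			if vmState(name) == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := stopAndWait("default-k8s-diff-port-071971", 120); err != nil {
			fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
			os.Exit(exitGuestStopTimeout)
		}
	}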

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-239931 -n embed-certs-239931
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-239931 -n embed-certs-239931: exit status 3 (3.167696716s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:51:35.907806   58998 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host
	E0708 20:51:35.907840   58998 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-239931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-239931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154309578s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-239931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-239931 -n embed-certs-239931
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-239931 -n embed-certs-239931: exit status 3 (3.061331345s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:51:45.123851   59077 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host
	E0708 20:51:45.123872   59077 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-239931" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971: exit status 3 (3.167736329s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:53:28.035835   59528 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.163:22: connect: no route to host
	E0708 20:53:28.035856   59528 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.163:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-071971 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-071971 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153310307s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.163:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-071971 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971: exit status 3 (3.062418966s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 20:53:37.251915   59609 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.163:22: connect: no route to host
	E0708 20:53:37.251944   59609 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.163:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-071971" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
(warning repeated 78 consecutive times while the apiserver at 192.168.50.65:8443 remained unreachable)
E0708 20:59:23.843583   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
(warning repeated 105 consecutive times while the apiserver at 192.168.50.65:8443 remained unreachable)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0708 21:01:29.733843   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
(the same WARNING was logged 139 more times while the apiserver at 192.168.50.65:8443 stayed unreachable)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0708 21:04:23.844431   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0708 21:04:32.785566   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0708 21:05:46.896195   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914355 -n old-k8s-version-914355
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 2 (246.849933ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-914355" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
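For reference, the wait that times out above can be reproduced by hand once the profile's apiserver answers again; this is a minimal sketch using the profile name, namespace, and label selector taken from the log, not something the test itself runs:

	# Check whether the apiserver for the profile is reachable at all
	out/minikube-linux-amd64 status -p old-k8s-version-914355
	# List the dashboard pods the test polls for (same selector as in the warnings above)
	kubectl --context old-k8s-version-914355 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide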
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 2 (231.334043ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-914355 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-914355 logs -n 25: (1.015054008s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-897827                                        | pause-897827                 | jenkins | v1.33.1 | 08 Jul 24 20:46 UTC | 08 Jul 24 20:46 UTC |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:46 UTC | 08 Jul 24 20:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| ssh     | cert-options-059722 ssh                                | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-059722 -- sudo                         | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-059722                                 | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-028021             | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-914355             | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-239931            | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-733920 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-733920                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:50 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028021                  | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071971  | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-239931                 | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071971       | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC | 08 Jul 24 21:01 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
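	Each multi-row Audit entry above is a single CLI invocation with its flags wrapped across rows. Reassembled for reference, the old-k8s-version start that never recorded an end time reads as follows (flags copied from the table; the binary path matches the out/minikube-linux-amd64 binary used elsewhere in this run):
	
	out/minikube-linux-amd64 start -p old-k8s-version-914355 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0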
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 20:53:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 20:53:37.291760   59655 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:53:37.291847   59655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:53:37.291851   59655 out.go:304] Setting ErrFile to fd 2...
	I0708 20:53:37.291855   59655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:53:37.292047   59655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:53:37.292558   59655 out.go:298] Setting JSON to false
	I0708 20:53:37.293434   59655 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5766,"bootTime":1720466251,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:53:37.293485   59655 start.go:139] virtualization: kvm guest
	I0708 20:53:37.296412   59655 out.go:177] * [default-k8s-diff-port-071971] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:53:37.297727   59655 notify.go:220] Checking for updates...
	I0708 20:53:37.297756   59655 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:53:37.299168   59655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:53:37.300541   59655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:53:37.301818   59655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:53:37.303117   59655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:53:37.304266   59655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:53:37.305793   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:53:37.306182   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:53:37.306236   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:53:37.321719   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I0708 20:53:37.322090   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:53:37.322593   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:53:37.322617   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:53:37.322908   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:53:37.323093   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:53:37.323329   59655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:53:37.323638   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:53:37.323679   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:53:37.338244   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0708 20:53:37.338660   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:53:37.339118   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:53:37.339144   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:53:37.339463   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:53:37.339735   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:53:37.374356   59655 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 20:53:37.375714   59655 start.go:297] selected driver: kvm2
	I0708 20:53:37.375729   59655 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:53:37.375866   59655 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:53:37.376843   59655 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:53:37.376918   59655 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:53:37.391219   59655 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:53:37.391602   59655 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:53:37.391659   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:53:37.391672   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:53:37.391707   59655 start.go:340] cluster config:
	{Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:53:37.391797   59655 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:53:37.393453   59655 out.go:177] * Starting "default-k8s-diff-port-071971" primary control-plane node in "default-k8s-diff-port-071971" cluster
	I0708 20:53:37.923695   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:40.995762   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:37.394736   59655 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:53:37.394768   59655 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 20:53:37.394777   59655 cache.go:56] Caching tarball of preloaded images
	I0708 20:53:37.394849   59655 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 20:53:37.394860   59655 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 20:53:37.394962   59655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:53:37.395154   59655 start.go:360] acquireMachinesLock for default-k8s-diff-port-071971: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:53:47.075721   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:50.147727   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:56.227766   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:59.299738   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:05.379699   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:08.451749   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:14.531759   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:17.603688   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:23.683730   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:26.755738   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:32.835706   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:35.907702   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:41.987722   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:45.059873   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:51.139726   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:54.211797   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:00.291730   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:03.363720   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:09.443741   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:12.515718   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:19.358315   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:55:19.358408   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:55:19.359948   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:55:19.360000   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:55:19.360076   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:55:19.360217   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:55:19.360354   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:55:19.360443   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:55:19.362594   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:55:19.362671   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:55:19.362761   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:55:19.362915   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:55:19.362997   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:55:19.363087   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:55:19.363181   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:55:19.363271   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:55:19.363360   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:55:19.363470   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:55:19.363582   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:55:19.363636   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:55:19.363711   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:55:19.363781   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:55:19.363852   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:55:19.363941   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:55:19.364010   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:55:19.364135   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:55:19.364226   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:55:19.364276   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:55:19.364342   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:55:18.595786   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:19.366132   57466 out.go:204]   - Booting up control plane ...
	I0708 20:55:19.366219   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:55:19.366301   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:55:19.366364   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:55:19.366433   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:55:19.366579   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:55:19.366629   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:55:19.366692   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.366846   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.366909   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367070   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367133   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367285   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367344   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367511   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367575   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367735   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367743   57466 kubeadm.go:309] 
	I0708 20:55:19.367783   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:55:19.367817   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:55:19.367823   57466 kubeadm.go:309] 
	I0708 20:55:19.367851   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:55:19.367888   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:55:19.367991   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:55:19.368009   57466 kubeadm.go:309] 
	I0708 20:55:19.368127   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:55:19.368164   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:55:19.368192   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:55:19.368198   57466 kubeadm.go:309] 
	I0708 20:55:19.368284   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:55:19.368355   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:55:19.368362   57466 kubeadm.go:309] 
	I0708 20:55:19.368455   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:55:19.368539   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:55:19.368606   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:55:19.368666   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:55:19.368673   57466 kubeadm.go:309] 
	W0708 20:55:19.368784   57466 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
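	On this setup the checks suggested in the kubeadm output run inside the VM, so they would typically be wrapped in minikube ssh; a minimal sketch against the old-k8s-version-914355 profile from this run, shown only as a starting point for debugging:
	
	# Kubelet health, mirroring the systemctl/journalctl hints in the error text
	out/minikube-linux-amd64 -p old-k8s-version-914355 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-914355 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	# Control-plane containers via CRI-O, as recommended above
	out/minikube-linux-amd64 -p old-k8s-version-914355 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"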
	
	I0708 20:55:19.368831   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 20:55:19.838778   57466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:55:19.853958   57466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:55:19.863986   57466 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:55:19.864010   57466 kubeadm.go:156] found existing configuration files:
	
	I0708 20:55:19.864055   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:55:19.873085   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:55:19.873147   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:55:19.882654   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:55:19.891579   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:55:19.891634   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:55:19.901397   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.910901   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:55:19.910976   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.920599   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:55:19.929826   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:55:19.929891   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:55:19.939284   57466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 20:55:20.153136   57466 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
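	The preflight warning above can be cleared manually before another init attempt; a minimal sketch assuming the same profile (the test itself does not do this):
	
	# Enable and start the kubelet unit so the Service-Kubelet preflight warning goes away
	out/minikube-linux-amd64 -p old-k8s-version-914355 ssh "sudo systemctl enable --now kubelet"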
	I0708 20:55:21.667700   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:27.747756   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:30.819712   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:33.824320   59107 start.go:364] duration metric: took 3m48.54985296s to acquireMachinesLock for "embed-certs-239931"
	I0708 20:55:33.824375   59107 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:55:33.824390   59107 fix.go:54] fixHost starting: 
	I0708 20:55:33.824700   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:55:33.824728   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:55:33.839554   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0708 20:55:33.839987   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:55:33.840472   59107 main.go:141] libmachine: Using API Version  1
	I0708 20:55:33.840495   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:55:33.840844   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:55:33.841030   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:33.841194   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 20:55:33.842597   59107 fix.go:112] recreateIfNeeded on embed-certs-239931: state=Stopped err=<nil>
	I0708 20:55:33.842627   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	W0708 20:55:33.842787   59107 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:55:33.844574   59107 out.go:177] * Restarting existing kvm2 VM for "embed-certs-239931" ...
	I0708 20:55:33.845674   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Start
	I0708 20:55:33.845858   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring networks are active...
	I0708 20:55:33.846607   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring network default is active
	I0708 20:55:33.846907   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring network mk-embed-certs-239931 is active
	I0708 20:55:33.847329   59107 main.go:141] libmachine: (embed-certs-239931) Getting domain xml...
	I0708 20:55:33.848120   59107 main.go:141] libmachine: (embed-certs-239931) Creating domain...
	I0708 20:55:35.057523   59107 main.go:141] libmachine: (embed-certs-239931) Waiting to get IP...
	I0708 20:55:35.058300   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.058841   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.058870   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.058773   60083 retry.go:31] will retry after 280.969113ms: waiting for machine to come up
	I0708 20:55:33.821580   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:55:33.821617   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:55:33.821932   58678 buildroot.go:166] provisioning hostname "no-preload-028021"
	I0708 20:55:33.821957   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:55:33.822166   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:55:33.824193   58678 machine.go:97] duration metric: took 4m37.421469682s to provisionDockerMachine
	I0708 20:55:33.824234   58678 fix.go:56] duration metric: took 4m37.444794791s for fixHost
	I0708 20:55:33.824241   58678 start.go:83] releasing machines lock for "no-preload-028021", held for 4m37.44481517s
	W0708 20:55:33.824262   58678 start.go:713] error starting host: provision: host is not running
	W0708 20:55:33.824343   58678 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0708 20:55:33.824352   58678 start.go:728] Will try again in 5 seconds ...
	I0708 20:55:35.341327   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.341861   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.341882   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.341837   60083 retry.go:31] will retry after 333.972717ms: waiting for machine to come up
	I0708 20:55:35.677531   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.678035   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.678066   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.677984   60083 retry.go:31] will retry after 387.46652ms: waiting for machine to come up
	I0708 20:55:36.066618   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:36.067079   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:36.067106   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:36.067044   60083 retry.go:31] will retry after 523.369614ms: waiting for machine to come up
	I0708 20:55:36.591863   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:36.592337   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:36.592363   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:36.592295   60083 retry.go:31] will retry after 670.675561ms: waiting for machine to come up
	I0708 20:55:37.264084   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:37.264521   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:37.264565   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:37.264485   60083 retry.go:31] will retry after 775.348922ms: waiting for machine to come up
	I0708 20:55:38.041398   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:38.041860   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:38.041885   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:38.041801   60083 retry.go:31] will retry after 1.135585711s: waiting for machine to come up
	I0708 20:55:39.179405   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:39.179910   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:39.179938   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:39.179867   60083 retry.go:31] will retry after 1.422689354s: waiting for machine to come up
	I0708 20:55:38.826037   58678 start.go:360] acquireMachinesLock for no-preload-028021: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:55:40.603811   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:40.604240   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:40.604261   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:40.604199   60083 retry.go:31] will retry after 1.640612147s: waiting for machine to come up
	I0708 20:55:42.247230   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:42.247797   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:42.247837   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:42.247733   60083 retry.go:31] will retry after 2.031069729s: waiting for machine to come up
	I0708 20:55:44.280878   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:44.281419   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:44.281451   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:44.281355   60083 retry.go:31] will retry after 2.394813785s: waiting for machine to come up
	I0708 20:55:46.678897   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:46.679398   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:46.679430   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:46.679357   60083 retry.go:31] will retry after 2.419242459s: waiting for machine to come up
	I0708 20:55:49.100362   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:49.100901   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:49.100964   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:49.100858   60083 retry.go:31] will retry after 4.241202363s: waiting for machine to come up
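(Editor's note: the "will retry after ..." lines above come from a wait-for-IP loop with growing, jittered delays. The sketch below is a minimal Go illustration of that pattern only; waitForIP and lookupIP are hypothetical stand-ins, not minikube's actual retry.go API, and the timing constants are assumptions.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the domain's IP.
// It is a hypothetical helper; here it always fails so the retry path runs.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP until it succeeds or the timeout expires,
// sleeping for an increasing, jittered delay between attempts.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, mirroring the increasing
		// 333ms ... 4.2s intervals visible in the log above.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 4*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for machine IP", timeout)
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}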
	I0708 20:55:54.868873   59655 start.go:364] duration metric: took 2m17.473689428s to acquireMachinesLock for "default-k8s-diff-port-071971"
	I0708 20:55:54.868970   59655 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:55:54.868991   59655 fix.go:54] fixHost starting: 
	I0708 20:55:54.869400   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:55:54.869432   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:55:54.888853   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44159
	I0708 20:55:54.889234   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:55:54.889674   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:55:54.889698   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:55:54.890009   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:55:54.890196   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:55:54.890332   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 20:55:54.891932   59655 fix.go:112] recreateIfNeeded on default-k8s-diff-port-071971: state=Stopped err=<nil>
	I0708 20:55:54.891972   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	W0708 20:55:54.892120   59655 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:55:54.894439   59655 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-071971" ...
	I0708 20:55:53.347154   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.347587   59107 main.go:141] libmachine: (embed-certs-239931) Found IP for machine: 192.168.61.126
	I0708 20:55:53.347601   59107 main.go:141] libmachine: (embed-certs-239931) Reserving static IP address...
	I0708 20:55:53.347612   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has current primary IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.348084   59107 main.go:141] libmachine: (embed-certs-239931) Reserved static IP address: 192.168.61.126
	I0708 20:55:53.348106   59107 main.go:141] libmachine: (embed-certs-239931) Waiting for SSH to be available...
	I0708 20:55:53.348119   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "embed-certs-239931", mac: "52:54:00:b3:d9:ac", ip: "192.168.61.126"} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.348136   59107 main.go:141] libmachine: (embed-certs-239931) DBG | skip adding static IP to network mk-embed-certs-239931 - found existing host DHCP lease matching {name: "embed-certs-239931", mac: "52:54:00:b3:d9:ac", ip: "192.168.61.126"}
	I0708 20:55:53.348148   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Getting to WaitForSSH function...
	I0708 20:55:53.350167   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.350545   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.350583   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.350651   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Using SSH client type: external
	I0708 20:55:53.350675   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa (-rw-------)
	I0708 20:55:53.350704   59107 main.go:141] libmachine: (embed-certs-239931) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:55:53.350722   59107 main.go:141] libmachine: (embed-certs-239931) DBG | About to run SSH command:
	I0708 20:55:53.350736   59107 main.go:141] libmachine: (embed-certs-239931) DBG | exit 0
	I0708 20:55:53.479934   59107 main.go:141] libmachine: (embed-certs-239931) DBG | SSH cmd err, output: <nil>: 
	I0708 20:55:53.480309   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetConfigRaw
	I0708 20:55:53.480891   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:53.483079   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.483399   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.483424   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.483740   59107 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/config.json ...
	I0708 20:55:53.483920   59107 machine.go:94] provisionDockerMachine start ...
	I0708 20:55:53.483937   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:53.484172   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.486461   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.486772   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.486793   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.486921   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.487075   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.487253   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.487385   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.487556   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.487774   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.487786   59107 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:55:53.600047   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:55:53.600078   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.600308   59107 buildroot.go:166] provisioning hostname "embed-certs-239931"
	I0708 20:55:53.600342   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.600508   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.603107   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.603498   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.603529   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.603728   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.603954   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.604122   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.604345   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.604512   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.604716   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.604737   59107 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-239931 && echo "embed-certs-239931" | sudo tee /etc/hostname
	I0708 20:55:53.734414   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-239931
	
	I0708 20:55:53.734457   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.737117   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.737473   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.737501   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.737640   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.737852   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.738020   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.738184   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.738355   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.738536   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.738558   59107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-239931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-239931/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-239931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:55:53.860753   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:55:53.860781   59107 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:55:53.860799   59107 buildroot.go:174] setting up certificates
	I0708 20:55:53.860808   59107 provision.go:84] configureAuth start
	I0708 20:55:53.860816   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.861070   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:53.863652   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.863999   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.864018   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.864221   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.866207   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.866480   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.866504   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.866613   59107 provision.go:143] copyHostCerts
	I0708 20:55:53.866671   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:55:53.866680   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:55:53.866741   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:55:53.866837   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:55:53.866845   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:55:53.866868   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:55:53.866932   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:55:53.866939   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:55:53.866959   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:55:53.867017   59107 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.embed-certs-239931 san=[127.0.0.1 192.168.61.126 embed-certs-239931 localhost minikube]
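(Editor's note: the line above generates a server certificate whose SANs cover the loopback address, the VM IP, the profile name, localhost, and minikube. A minimal Go sketch of issuing a certificate with those SANs follows; it is self-signed for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair named in the log, and the validity period and organization are assumptions.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Fresh key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-239931"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		DNSNames:    []string{"embed-certs-239931", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.126")},
	}
	// Self-signed here; the real provisioner uses the minikube CA as parent/signer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}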
	I0708 20:55:54.171765   59107 provision.go:177] copyRemoteCerts
	I0708 20:55:54.171835   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:55:54.171859   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.174341   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.174621   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.174650   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.174762   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.174957   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.175129   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.175280   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.262051   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:55:54.287118   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0708 20:55:54.310071   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:55:54.337811   59107 provision.go:87] duration metric: took 476.990356ms to configureAuth
	I0708 20:55:54.337851   59107 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:55:54.338077   59107 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:55:54.338147   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.340972   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.341259   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.341296   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.341423   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.341720   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.341870   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.342006   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.342147   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:54.342332   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:54.342350   59107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:55:54.618752   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:55:54.618775   59107 machine.go:97] duration metric: took 1.134844127s to provisionDockerMachine
	I0708 20:55:54.618786   59107 start.go:293] postStartSetup for "embed-certs-239931" (driver="kvm2")
	I0708 20:55:54.618795   59107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:55:54.618823   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.619220   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:55:54.619249   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.621857   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.622144   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.622168   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.622348   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.622532   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.622703   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.622853   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.710096   59107 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:55:54.714437   59107 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:55:54.714458   59107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:55:54.714524   59107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:55:54.714597   59107 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:55:54.714679   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:55:54.724350   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:55:54.748078   59107 start.go:296] duration metric: took 129.264358ms for postStartSetup
	I0708 20:55:54.748120   59107 fix.go:56] duration metric: took 20.923736253s for fixHost
	I0708 20:55:54.748138   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.750818   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.751200   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.751227   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.751377   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.751611   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.751763   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.751879   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.752034   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:54.752240   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:54.752256   59107 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:55:54.868663   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472154.844724958
	
	I0708 20:55:54.868694   59107 fix.go:216] guest clock: 1720472154.844724958
	I0708 20:55:54.868706   59107 fix.go:229] Guest: 2024-07-08 20:55:54.844724958 +0000 UTC Remote: 2024-07-08 20:55:54.748123056 +0000 UTC m=+249.617599643 (delta=96.601902ms)
	I0708 20:55:54.868764   59107 fix.go:200] guest clock delta is within tolerance: 96.601902ms
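(Editor's note: the guest-clock check above compares the VM's `date +%s.%N` output against the host's wall clock and accepts the host's provisioning result if the difference is small. The Go sketch below reproduces that arithmetic with the values from the log; the 2-second tolerance is an assumption, not minikube's exact threshold.)

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest time as reported over SSH (seconds.nanoseconds from the log above).
	guest := time.Unix(1720472154, 844724958)
	// Host-side timestamp captured when the SSH command returned (from the log above).
	remote := time.Date(2024, 7, 8, 20, 55, 54, 748123056, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}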
	I0708 20:55:54.868776   59107 start.go:83] releasing machines lock for "embed-certs-239931", held for 21.044425411s
	I0708 20:55:54.868811   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.869092   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:54.871867   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.872252   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.872295   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.872451   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.872921   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.873060   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.873151   59107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:55:54.873196   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.873271   59107 ssh_runner.go:195] Run: cat /version.json
	I0708 20:55:54.873297   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.876118   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876142   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876614   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.876641   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876682   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.876699   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876851   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.876903   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.877017   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.877020   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.877193   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.877266   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.877349   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.877424   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.984516   59107 ssh_runner.go:195] Run: systemctl --version
	I0708 20:55:54.990926   59107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:55:55.142010   59107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:55:55.148138   59107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:55:55.148203   59107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:55:55.164086   59107 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:55:55.164111   59107 start.go:494] detecting cgroup driver to use...
	I0708 20:55:55.164204   59107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:55:55.184836   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:55:55.204002   59107 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:55:55.204079   59107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:55:55.218405   59107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:55:55.233462   59107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:55:55.357278   59107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:55:55.521141   59107 docker.go:233] disabling docker service ...
	I0708 20:55:55.521218   59107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:55:55.538949   59107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:55:55.558613   59107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:55:55.696926   59107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:55:55.819810   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:55:55.837012   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:55:55.856417   59107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:55:55.856497   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.868488   59107 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:55:55.868556   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.879503   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.891183   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.901872   59107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:55:55.914498   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.925676   59107 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.944340   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.955961   59107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:55:55.965785   59107 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:55:55.965845   59107 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:55:55.979853   59107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:55:55.989331   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:55:56.108476   59107 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:55:56.262396   59107 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:55:56.262463   59107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:55:56.267682   59107 start.go:562] Will wait 60s for crictl version
	I0708 20:55:56.267740   59107 ssh_runner.go:195] Run: which crictl
	I0708 20:55:56.273115   59107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:55:56.323276   59107 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:55:56.323364   59107 ssh_runner.go:195] Run: crio --version
	I0708 20:55:56.352650   59107 ssh_runner.go:195] Run: crio --version
	I0708 20:55:56.394502   59107 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:55:54.895951   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Start
	I0708 20:55:54.896150   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring networks are active...
	I0708 20:55:54.896971   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring network default is active
	I0708 20:55:54.897344   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring network mk-default-k8s-diff-port-071971 is active
	I0708 20:55:54.897672   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Getting domain xml...
	I0708 20:55:54.898340   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Creating domain...
	I0708 20:55:56.182187   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting to get IP...
	I0708 20:55:56.183209   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.183699   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.183759   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.183663   60221 retry.go:31] will retry after 255.382138ms: waiting for machine to come up
	I0708 20:55:56.441290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.441760   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.441789   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.441718   60221 retry.go:31] will retry after 363.116234ms: waiting for machine to come up
	I0708 20:55:56.806418   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.806871   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.806899   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.806819   60221 retry.go:31] will retry after 392.319836ms: waiting for machine to come up
	I0708 20:55:57.200645   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.201144   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.201176   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:57.201095   60221 retry.go:31] will retry after 528.490844ms: waiting for machine to come up
	I0708 20:55:56.395778   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:56.398458   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:56.398826   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:56.398853   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:56.399088   59107 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0708 20:55:56.403789   59107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:55:56.418081   59107 kubeadm.go:877] updating cluster {Name:embed-certs-239931 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:55:56.418244   59107 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:55:56.418312   59107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:55:56.459969   59107 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:55:56.460034   59107 ssh_runner.go:195] Run: which lz4
	I0708 20:55:56.464561   59107 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0708 20:55:56.469087   59107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:55:56.469130   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 20:55:58.010716   59107 crio.go:462] duration metric: took 1.546186223s to copy over tarball
	I0708 20:55:58.010782   59107 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:55:57.731640   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.732172   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.732223   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:57.732129   60221 retry.go:31] will retry after 554.611559ms: waiting for machine to come up
	I0708 20:55:58.287924   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.288512   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.288557   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:58.288491   60221 retry.go:31] will retry after 642.466107ms: waiting for machine to come up
	I0708 20:55:58.932485   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.933002   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.933032   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:58.932958   60221 retry.go:31] will retry after 999.83146ms: waiting for machine to come up
	I0708 20:55:59.934050   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:59.934618   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:59.934664   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:59.934571   60221 retry.go:31] will retry after 1.069868254s: waiting for machine to come up
	I0708 20:56:01.006049   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:01.006563   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:01.006594   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:01.006506   60221 retry.go:31] will retry after 1.182777891s: waiting for machine to come up
	I0708 20:56:02.191001   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:02.191460   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:02.191483   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:02.191418   60221 retry.go:31] will retry after 1.559742627s: waiting for machine to come up
	I0708 20:56:00.267199   59107 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256392679s)
	I0708 20:56:00.267230   59107 crio.go:469] duration metric: took 2.256489175s to extract the tarball
	I0708 20:56:00.267240   59107 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:56:00.305692   59107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:00.346669   59107 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:56:00.346694   59107 cache_images.go:84] Images are preloaded, skipping loading
	I0708 20:56:00.346703   59107 kubeadm.go:928] updating node { 192.168.61.126 8443 v1.30.2 crio true true} ...
	I0708 20:56:00.346804   59107 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-239931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:00.346868   59107 ssh_runner.go:195] Run: crio config
	I0708 20:56:00.392577   59107 cni.go:84] Creating CNI manager for ""
	I0708 20:56:00.392597   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:00.392608   59107 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:00.392637   59107 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-239931 NodeName:embed-certs-239931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:00.392814   59107 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-239931"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:00.392894   59107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:00.403593   59107 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:00.403675   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:00.413449   59107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0708 20:56:00.430407   59107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:00.447599   59107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0708 20:56:00.465525   59107 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:00.469912   59107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
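The one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP. A minimal Go sketch of the same "drop any stale mapping, then append the current one" pattern; ensureHostsEntry is an illustrative helper name, not part of minikube's code:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing "<something>\t<host>" line and
// appends "<ip>\t<host>", mirroring the grep -v / echo / cp pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // drop blank lines and stale mappings for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Point at a scratch copy rather than the real /etc/hosts when experimenting.
	if err := ensureHostsEntry("/tmp/hosts.copy", "192.168.61.126", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}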
	I0708 20:56:00.483255   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:00.623802   59107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:00.642946   59107 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931 for IP: 192.168.61.126
	I0708 20:56:00.642967   59107 certs.go:194] generating shared ca certs ...
	I0708 20:56:00.642982   59107 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:00.643143   59107 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:00.643184   59107 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:00.643193   59107 certs.go:256] generating profile certs ...
	I0708 20:56:00.643270   59107 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/client.key
	I0708 20:56:00.643317   59107 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.key.7743ab88
	I0708 20:56:00.643354   59107 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.key
	I0708 20:56:00.643487   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:00.643524   59107 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:00.643533   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:00.643556   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:00.643579   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:00.643604   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:00.643670   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:00.644353   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:00.699260   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:00.752536   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:00.783946   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:00.812524   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0708 20:56:00.843035   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:56:00.872061   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:00.898805   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 20:56:00.925402   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:00.952114   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:00.984067   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:01.010037   59107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:01.027599   59107 ssh_runner.go:195] Run: openssl version
	I0708 20:56:01.033942   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:01.046273   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.051807   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.051887   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.058482   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:01.070774   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:01.083215   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.088389   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.088460   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.094594   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:01.107360   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:01.119973   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.125011   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.125085   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.131596   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
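The sequence above installs each CA into /etc/ssl/certs by asking openssl for the certificate's subject hash and symlinking <hash>.0 to the PEM file. A sketch of that pattern which shells out to openssl the same way; installCACert is an assumed name, not a minikube function:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert asks openssl for the certificate's subject hash and creates
// the <hash>.0 symlink that OpenSSL's CA directory lookup expects.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // behave like ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}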
	I0708 20:56:01.143993   59107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:01.149299   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:01.156201   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:01.162939   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:01.169874   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:01.176264   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:01.182905   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
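"openssl x509 -checkend 86400" answers whether a certificate will still be valid 24 hours from now. The same check can be done in pure Go with crypto/x509; this is only an illustrative sketch, with expiresWithin as an assumed helper name:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath expires
// within d, i.e. the condition "openssl x509 -checkend" tests for.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}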
	I0708 20:56:01.189961   59107 kubeadm.go:391] StartCluster: {Name:embed-certs-239931 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:01.190041   59107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:01.190085   59107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:01.238097   59107 cri.go:89] found id: ""
	I0708 20:56:01.238167   59107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:01.250478   59107 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:01.250503   59107 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:01.250509   59107 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:01.250562   59107 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:01.261664   59107 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:01.262667   59107 kubeconfig.go:125] found "embed-certs-239931" server: "https://192.168.61.126:8443"
	I0708 20:56:01.264788   59107 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:01.275846   59107 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.126
	I0708 20:56:01.275888   59107 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:01.275908   59107 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:01.276006   59107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:01.318646   59107 cri.go:89] found id: ""
	I0708 20:56:01.318745   59107 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:01.340273   59107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:01.353325   59107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:01.353360   59107 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:01.353412   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:56:01.363659   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:01.363732   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:01.374340   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:56:01.384284   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:01.384352   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:01.394981   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:56:01.405532   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:01.405604   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:01.416741   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:56:01.427724   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:01.427812   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:56:01.439352   59107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:01.451286   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:01.581829   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.013995   59107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.432133224s)
	I0708 20:56:03.014024   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.229195   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.305328   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
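On restart, the control plane is rebuilt by running individual "kubeadm init phase" subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml rather than a full "kubeadm init". A sketch of that sequence driven from Go with os/exec; runKubeadmPhases is an assumed name, and the sudo/PATH wrapping from the log is omitted for brevity:

package main

import (
	"fmt"
	"os/exec"
)

// runKubeadmPhases replays the phase sequence shown in the log above,
// stopping at the first failure and surfacing the command output.
func runKubeadmPhases(kubeadmBin, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadmBin, append(args, "--config", config)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := runKubeadmPhases("/var/lib/minikube/binaries/v1.30.2/kubeadm", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}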
	I0708 20:56:03.415409   59107 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:03.415508   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:03.916187   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:04.416389   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:04.489450   59107 api_server.go:72] duration metric: took 1.074041899s to wait for apiserver process to appear ...
	I0708 20:56:04.489482   59107 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:04.489516   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:04.490133   59107 api_server.go:269] stopped: https://192.168.61.126:8443/healthz: Get "https://192.168.61.126:8443/healthz": dial tcp 192.168.61.126:8443: connect: connection refused
	I0708 20:56:04.989698   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:03.753446   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:03.753998   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:03.754026   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:03.753954   60221 retry.go:31] will retry after 1.922949894s: waiting for machine to come up
	I0708 20:56:05.679244   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:05.679831   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:05.679860   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:05.679794   60221 retry.go:31] will retry after 3.531627288s: waiting for machine to come up
	I0708 20:56:07.673375   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:56:07.673404   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:56:07.673420   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:07.776516   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:07.776551   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:07.989668   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:07.996865   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:07.996897   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:08.490538   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:08.496342   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:08.496374   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:08.990583   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:09.001043   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0708 20:56:09.011126   59107 api_server.go:141] control plane version: v1.30.2
	I0708 20:56:09.011160   59107 api_server.go:131] duration metric: took 4.521668725s to wait for apiserver health ...
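The lines above show the apiserver /healthz endpoint being polled until it returns 200, tolerating the 403 (anonymous user) and 500 (etcd and bootstrap hooks not yet ready) responses seen first. A rough Go sketch of such a polling loop; waitForHealthz is an assumed helper, and TLS verification is skipped only because the test apiserver presents a self-signed certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 OK or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			// 403 and 500 are expected while bootstrap hooks finish; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.126:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}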
	I0708 20:56:09.011171   59107 cni.go:84] Creating CNI manager for ""
	I0708 20:56:09.011179   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:09.012842   59107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:56:09.014197   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:56:09.041325   59107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 20:56:09.073110   59107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:56:09.086225   59107 system_pods.go:59] 8 kube-system pods found
	I0708 20:56:09.086265   59107 system_pods.go:61] "coredns-7db6d8ff4d-wnqsl" [868e66bf-9f86-465f-aad1-d11a6d218ee6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:56:09.086276   59107 system_pods.go:61] "etcd-embed-certs-239931" [48815314-6e48-4fe0-b7b1-4a1d2f6610d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:56:09.086286   59107 system_pods.go:61] "kube-apiserver-embed-certs-239931" [665311f4-d633-4b93-ae8c-2b43b45fff68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:56:09.086294   59107 system_pods.go:61] "kube-controller-manager-embed-certs-239931" [4ab6d657-8c74-491c-b965-ac68f2bd323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:56:09.086309   59107 system_pods.go:61] "kube-proxy-5h5xl" [9b169148-aa75-40a2-b08b-1d579ee179b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 20:56:09.086316   59107 system_pods.go:61] "kube-scheduler-embed-certs-239931" [012399d8-10a4-407d-a899-3c840dd52ca8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:56:09.086331   59107 system_pods.go:61] "metrics-server-569cc877fc-h4btg" [c78cfc3c-159f-4a06-b4a0-63f8bd0a6703] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:56:09.086339   59107 system_pods.go:61] "storage-provisioner" [2ca0ea1d-5d1c-4e18-a871-e035a8946b3c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 20:56:09.086348   59107 system_pods.go:74] duration metric: took 13.216051ms to wait for pod list to return data ...
	I0708 20:56:09.086363   59107 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:56:09.089689   59107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:56:09.089719   59107 node_conditions.go:123] node cpu capacity is 2
	I0708 20:56:09.089732   59107 node_conditions.go:105] duration metric: took 3.363611ms to run NodePressure ...
	I0708 20:56:09.089751   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:09.377271   59107 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:56:09.383148   59107 kubeadm.go:733] kubelet initialised
	I0708 20:56:09.383174   59107 kubeadm.go:734] duration metric: took 5.876526ms waiting for restarted kubelet to initialise ...
	I0708 20:56:09.383183   59107 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:56:09.388903   59107 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace to be "Ready" ...
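Waiting for a pod to report Ready, as pod_ready.go does for coredns above, can be expressed with client-go and the apimachinery wait helpers. A sketch under the assumption that a kubeconfig such as /etc/kubernetes/admin.conf is reachable; waitForPodReady is an illustrative name, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls the pod until its PodReady condition is True.
func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(waitForPodReady(cs, "kube-system", "coredns-7db6d8ff4d-wnqsl", 4*time.Minute))
}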
	I0708 20:56:09.214856   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:09.215410   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:09.215441   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:09.215355   60221 retry.go:31] will retry after 3.64169465s: waiting for machine to come up
	I0708 20:56:14.180834   58678 start.go:364] duration metric: took 35.354748041s to acquireMachinesLock for "no-preload-028021"
	I0708 20:56:14.180893   58678 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:56:14.180905   58678 fix.go:54] fixHost starting: 
	I0708 20:56:14.181259   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:56:14.181299   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:56:14.197712   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I0708 20:56:14.198157   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:56:14.198615   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:56:14.198637   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:56:14.198996   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:56:14.199173   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:14.199342   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:56:14.200905   58678 fix.go:112] recreateIfNeeded on no-preload-028021: state=Stopped err=<nil>
	I0708 20:56:14.200930   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	W0708 20:56:14.201103   58678 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:56:14.203062   58678 out.go:177] * Restarting existing kvm2 VM for "no-preload-028021" ...
	I0708 20:56:11.396453   59107 pod_ready.go:102] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:13.396672   59107 pod_ready.go:102] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:12.860535   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.860988   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Found IP for machine: 192.168.72.163
	I0708 20:56:12.861010   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Reserving static IP address...
	I0708 20:56:12.861027   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has current primary IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.861445   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-071971", mac: "52:54:00:40:a7:be", ip: "192.168.72.163"} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.861473   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Reserved static IP address: 192.168.72.163
	I0708 20:56:12.861494   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | skip adding static IP to network mk-default-k8s-diff-port-071971 - found existing host DHCP lease matching {name: "default-k8s-diff-port-071971", mac: "52:54:00:40:a7:be", ip: "192.168.72.163"}
	I0708 20:56:12.861515   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Getting to WaitForSSH function...
	I0708 20:56:12.861531   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for SSH to be available...
	I0708 20:56:12.864099   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.864436   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.864465   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.864631   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Using SSH client type: external
	I0708 20:56:12.864663   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa (-rw-------)
	I0708 20:56:12.864693   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:56:12.864708   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | About to run SSH command:
	I0708 20:56:12.864721   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | exit 0
	I0708 20:56:12.996077   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | SSH cmd err, output: <nil>: 
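The DBG lines above show libmachine probing the VM with "exit 0" over SSH until the guest accepts connections. A sketch of an equivalent wait loop using golang.org/x/crypto/ssh; waitForSSH is an assumed helper, and the host-key check is disabled only to mirror the StrictHostKeyChecking=no options in the log:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials addr with the given private key until "exit 0" succeeds
// or the timeout expires.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				rerr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if rerr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %s", addr, timeout)
}

func main() {
	key := "/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa"
	if err := waitForSSH("192.168.72.163:22", "docker", key, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}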
	I0708 20:56:12.996459   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetConfigRaw
	I0708 20:56:12.997091   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:12.999431   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.999815   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.999844   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.000145   59655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:56:13.000354   59655 machine.go:94] provisionDockerMachine start ...
	I0708 20:56:13.000377   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:13.000558   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.002898   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.003255   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.003290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.003444   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.003626   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.003778   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.003930   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.004094   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.004297   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.004311   59655 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:56:13.119929   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:56:13.119956   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.120203   59655 buildroot.go:166] provisioning hostname "default-k8s-diff-port-071971"
	I0708 20:56:13.120320   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.120544   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.123750   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.124225   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.124256   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.124438   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.124647   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.124818   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.124993   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.125155   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.125339   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.125360   59655 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-071971 && echo "default-k8s-diff-port-071971" | sudo tee /etc/hostname
	I0708 20:56:13.256165   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-071971
	
	I0708 20:56:13.256199   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.258991   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.259342   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.259376   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.259596   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.259828   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.260011   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.260149   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.260325   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.260506   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.260530   59655 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-071971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-071971/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-071971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:56:13.381593   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:56:13.381627   59655 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:56:13.381684   59655 buildroot.go:174] setting up certificates
	I0708 20:56:13.381700   59655 provision.go:84] configureAuth start
	I0708 20:56:13.381716   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.382023   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:13.385065   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.385358   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.385394   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.385566   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.387752   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.388092   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.388132   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.388290   59655 provision.go:143] copyHostCerts
	I0708 20:56:13.388350   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:56:13.388361   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:56:13.388408   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:56:13.388506   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:56:13.388516   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:56:13.388536   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:56:13.388587   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:56:13.388593   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:56:13.388610   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:56:13.389123   59655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-071971 san=[127.0.0.1 192.168.72.163 default-k8s-diff-port-071971 localhost minikube]
	I0708 20:56:13.445451   59655 provision.go:177] copyRemoteCerts
	I0708 20:56:13.445509   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:56:13.445536   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.448926   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.449291   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.449320   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.449579   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.449785   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.449944   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.450097   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:13.542311   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0708 20:56:13.570585   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 20:56:13.597943   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:56:13.623837   59655 provision.go:87] duration metric: took 242.102893ms to configureAuth
	I0708 20:56:13.623874   59655 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:56:13.624077   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:56:13.624144   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.626802   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.627247   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.627277   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.627553   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.627738   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.627910   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.628047   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.628214   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.628414   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.628442   59655 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:56:13.930321   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:56:13.930349   59655 machine.go:97] duration metric: took 929.979999ms to provisionDockerMachine
	I0708 20:56:13.930361   59655 start.go:293] postStartSetup for "default-k8s-diff-port-071971" (driver="kvm2")
	I0708 20:56:13.930371   59655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:56:13.930385   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:13.930714   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:56:13.930747   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.933397   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.933704   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.933735   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.933927   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.934119   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.934266   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.934393   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.019603   59655 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:56:14.024556   59655 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:56:14.024589   59655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:56:14.024651   59655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:56:14.024744   59655 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:56:14.024836   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:56:14.035798   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:14.062351   59655 start.go:296] duration metric: took 131.974167ms for postStartSetup
	I0708 20:56:14.062402   59655 fix.go:56] duration metric: took 19.193418124s for fixHost
	I0708 20:56:14.062428   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.065264   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.065784   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.065822   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.066027   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.066271   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.066441   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.066716   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.066965   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:14.067197   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:14.067210   59655 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:56:14.180654   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472174.151879540
	
	I0708 20:56:14.180683   59655 fix.go:216] guest clock: 1720472174.151879540
	I0708 20:56:14.180695   59655 fix.go:229] Guest: 2024-07-08 20:56:14.15187954 +0000 UTC Remote: 2024-07-08 20:56:14.062408788 +0000 UTC m=+156.804206336 (delta=89.470752ms)
	I0708 20:56:14.180751   59655 fix.go:200] guest clock delta is within tolerance: 89.470752ms
	I0708 20:56:14.180757   59655 start.go:83] releasing machines lock for "default-k8s-diff-port-071971", held for 19.311816598s
	I0708 20:56:14.180802   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.181119   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:14.183833   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.184164   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.184194   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.184365   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.184862   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.185029   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.185105   59655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:56:14.185152   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.185222   59655 ssh_runner.go:195] Run: cat /version.json
	I0708 20:56:14.185248   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.187788   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188002   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188135   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.188167   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.188299   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.188328   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188501   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.188505   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.188641   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.188715   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.188803   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.188885   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.189022   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.298253   59655 ssh_runner.go:195] Run: systemctl --version
	I0708 20:56:14.305004   59655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:56:14.457540   59655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:56:14.464497   59655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:56:14.464567   59655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:56:14.482063   59655 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:56:14.482093   59655 start.go:494] detecting cgroup driver to use...
	I0708 20:56:14.482172   59655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:56:14.500206   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:56:14.515905   59655 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:56:14.515952   59655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:56:14.532277   59655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:56:14.552772   59655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:56:14.686229   59655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:56:14.845428   59655 docker.go:233] disabling docker service ...
	I0708 20:56:14.845496   59655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:56:14.863157   59655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:56:14.881174   59655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:56:15.029269   59655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:56:15.165105   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:56:15.181619   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:56:15.202743   59655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:56:15.202806   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.215848   59655 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:56:15.215925   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.228697   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.240964   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.257002   59655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:56:15.270309   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.283215   59655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.303235   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.322364   59655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:56:15.340757   59655 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:56:15.340836   59655 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:56:15.360592   59655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:56:15.372486   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:15.510210   59655 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:56:15.656090   59655 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:56:15.656169   59655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:56:15.661847   59655 start.go:562] Will wait 60s for crictl version
	I0708 20:56:15.661917   59655 ssh_runner.go:195] Run: which crictl
	I0708 20:56:15.666004   59655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:56:15.707842   59655 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:56:15.707928   59655 ssh_runner.go:195] Run: crio --version
	I0708 20:56:15.740434   59655 ssh_runner.go:195] Run: crio --version
	I0708 20:56:15.772450   59655 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:56:14.204596   58678 main.go:141] libmachine: (no-preload-028021) Calling .Start
	I0708 20:56:14.204780   58678 main.go:141] libmachine: (no-preload-028021) Ensuring networks are active...
	I0708 20:56:14.205463   58678 main.go:141] libmachine: (no-preload-028021) Ensuring network default is active
	I0708 20:56:14.205799   58678 main.go:141] libmachine: (no-preload-028021) Ensuring network mk-no-preload-028021 is active
	I0708 20:56:14.206280   58678 main.go:141] libmachine: (no-preload-028021) Getting domain xml...
	I0708 20:56:14.207187   58678 main.go:141] libmachine: (no-preload-028021) Creating domain...
	I0708 20:56:15.514100   58678 main.go:141] libmachine: (no-preload-028021) Waiting to get IP...
	I0708 20:56:15.514946   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:15.515419   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:15.515473   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:15.515397   60369 retry.go:31] will retry after 282.59763ms: waiting for machine to come up
	I0708 20:56:15.799976   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:15.800525   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:15.800555   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:15.800482   60369 retry.go:31] will retry after 377.094067ms: waiting for machine to come up
	I0708 20:56:16.179257   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:16.179953   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:16.179979   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:16.179861   60369 retry.go:31] will retry after 433.953923ms: waiting for machine to come up
	I0708 20:56:15.773711   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:15.776947   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:15.777368   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:15.777400   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:15.777704   59655 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0708 20:56:15.782466   59655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:15.796924   59655 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:56:15.797072   59655 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:56:15.797138   59655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:15.841838   59655 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:56:15.841922   59655 ssh_runner.go:195] Run: which lz4
	I0708 20:56:15.846443   59655 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0708 20:56:15.851267   59655 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:56:15.851302   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 20:56:15.397039   59107 pod_ready.go:92] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:15.397070   59107 pod_ready.go:81] duration metric: took 6.008141421s for pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:15.397082   59107 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.405606   59107 pod_ready.go:92] pod "etcd-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:17.405638   59107 pod_ready.go:81] duration metric: took 2.008547358s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.405653   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.411786   59107 pod_ready.go:92] pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:17.411810   59107 pod_ready.go:81] duration metric: took 6.14625ms for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.411822   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.421681   59107 pod_ready.go:92] pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.421712   59107 pod_ready.go:81] duration metric: took 2.009879259s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.421725   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5h5xl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.428235   59107 pod_ready.go:92] pod "kube-proxy-5h5xl" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.428260   59107 pod_ready.go:81] duration metric: took 6.527896ms for pod "kube-proxy-5h5xl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.428269   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.433130   59107 pod_ready.go:92] pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.433154   59107 pod_ready.go:81] duration metric: took 4.87807ms for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.433163   59107 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:16.615670   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:16.616225   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:16.616257   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:16.616177   60369 retry.go:31] will retry after 489.658115ms: waiting for machine to come up
	I0708 20:56:17.107848   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:17.108391   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:17.108420   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:17.108341   60369 retry.go:31] will retry after 620.239043ms: waiting for machine to come up
	I0708 20:56:17.730239   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:17.730822   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:17.730854   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:17.730758   60369 retry.go:31] will retry after 818.379867ms: waiting for machine to come up
	I0708 20:56:18.550539   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:18.551024   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:18.551049   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:18.550993   60369 retry.go:31] will retry after 1.138596155s: waiting for machine to come up
	I0708 20:56:19.691669   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:19.692214   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:19.692267   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:19.692149   60369 retry.go:31] will retry after 1.467771065s: waiting for machine to come up
	I0708 20:56:21.161367   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:21.161916   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:21.161945   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:21.161854   60369 retry.go:31] will retry after 1.592022559s: waiting for machine to come up
	I0708 20:56:17.447251   59655 crio.go:462] duration metric: took 1.600850063s to copy over tarball
	I0708 20:56:17.447341   59655 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:56:19.773249   59655 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.325874804s)
	I0708 20:56:19.773277   59655 crio.go:469] duration metric: took 2.325993304s to extract the tarball
	I0708 20:56:19.773286   59655 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:56:19.811911   59655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:19.859029   59655 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:56:19.859060   59655 cache_images.go:84] Images are preloaded, skipping loading
	I0708 20:56:19.859070   59655 kubeadm.go:928] updating node { 192.168.72.163 8444 v1.30.2 crio true true} ...
	I0708 20:56:19.859208   59655 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-071971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:19.859281   59655 ssh_runner.go:195] Run: crio config
	I0708 20:56:19.905778   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:56:19.905806   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:19.905822   59655 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:19.905847   59655 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.163 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-071971 NodeName:default-k8s-diff-port-071971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:19.906035   59655 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.163
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-071971"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:19.906113   59655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:19.916307   59655 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:19.916388   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:19.926496   59655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0708 20:56:19.947778   59655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:19.969466   59655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0708 20:56:19.991103   59655 ssh_runner.go:195] Run: grep 192.168.72.163	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:19.995180   59655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:20.008005   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:20.143869   59655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:20.162694   59655 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971 for IP: 192.168.72.163
	I0708 20:56:20.162713   59655 certs.go:194] generating shared ca certs ...
	I0708 20:56:20.162745   59655 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:20.162930   59655 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:20.162986   59655 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:20.162997   59655 certs.go:256] generating profile certs ...
	I0708 20:56:20.163097   59655 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.key
	I0708 20:56:20.163220   59655 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.key.17bd30e8
	I0708 20:56:20.163259   59655 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.key
	I0708 20:56:20.163394   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:20.163478   59655 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:20.163493   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:20.163524   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:20.163558   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:20.163594   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:20.163659   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:20.164318   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:20.198987   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:20.251872   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:20.281444   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:20.305751   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0708 20:56:20.332608   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 20:56:20.365206   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:20.399631   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:56:20.430016   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:20.462126   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:20.492669   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:20.521867   59655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:20.540725   59655 ssh_runner.go:195] Run: openssl version
	I0708 20:56:20.546789   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:20.558515   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.563342   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.563430   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.570039   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:20.585367   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:20.601217   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.605930   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.605993   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.612015   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:56:20.623796   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:20.635305   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.640571   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.640649   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.648600   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:20.663899   59655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:20.669383   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:20.675967   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:20.682513   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:20.690280   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:20.698720   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:20.705678   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0708 20:56:20.712524   59655 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:20.712643   59655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:20.712700   59655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:20.761032   59655 cri.go:89] found id: ""
	I0708 20:56:20.761107   59655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:20.772712   59655 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:20.772736   59655 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:20.772742   59655 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:20.772793   59655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:20.784860   59655 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:20.785974   59655 kubeconfig.go:125] found "default-k8s-diff-port-071971" server: "https://192.168.72.163:8444"
	I0708 20:56:20.788290   59655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:20.799889   59655 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.163
	I0708 20:56:20.799919   59655 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:20.799947   59655 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:20.800011   59655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:20.846864   59655 cri.go:89] found id: ""
	I0708 20:56:20.846936   59655 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:20.865883   59655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:20.877476   59655 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:20.877495   59655 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:20.877548   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0708 20:56:20.889786   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:20.889853   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:20.902185   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0708 20:56:20.913510   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:20.913573   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:20.923964   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0708 20:56:20.934048   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:20.934131   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:20.945078   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0708 20:56:20.955290   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:20.955354   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:56:20.966182   59655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:20.977508   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:21.319213   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:21.511204   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:23.942367   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:22.755738   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:22.756182   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:22.756243   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:22.756167   60369 retry.go:31] will retry after 1.858003233s: waiting for machine to come up
	I0708 20:56:24.616152   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:24.616674   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:24.616703   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:24.616618   60369 retry.go:31] will retry after 2.203640369s: waiting for machine to come up
	I0708 20:56:22.471504   59655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.152252924s)
	I0708 20:56:22.471539   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.692407   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.756884   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.892773   59655 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:22.892888   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.393789   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.893298   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.941073   59655 api_server.go:72] duration metric: took 1.048301169s to wait for apiserver process to appear ...
	I0708 20:56:23.941100   59655 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:23.941131   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.221991   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:56:27.222029   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:56:27.222048   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:26.441670   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:28.939138   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:27.353017   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.353069   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:27.442130   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.447304   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.447326   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:27.941979   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.951850   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.951878   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:28.441380   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:28.452031   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:28.452069   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:28.941613   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:28.946045   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:28.946084   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:29.441485   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:29.448847   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:29.448877   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:29.941906   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:29.946380   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:29.946416   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:30.442013   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:30.447291   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 200:
	ok
	I0708 20:56:30.454664   59655 api_server.go:141] control plane version: v1.30.2
	I0708 20:56:30.454693   59655 api_server.go:131] duration metric: took 6.513586414s to wait for apiserver health ...
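The block above is the standard wait-for-apiserver loop: /healthz is polled roughly every half second, 403 (RBAC bootstrap not finished) and 500 (post-start hooks still failing) responses are logged and retried, and the wait ends once the endpoint returns 200 "ok". A self-contained sketch of that loop, assuming anonymous HTTPS with certificate verification disabled, whereas minikube authenticates with the cluster's client certificates:

// Illustrative sketch (not minikube source): poll an apiserver /healthz endpoint
// until it returns HTTP 200, treating 403/500 responses as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports "ok"
			}
			// 403 (anonymous /healthz blocked until RBAC bootstraps) and 500
			// (post-start hooks still failing) both mean: keep waiting.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between checks
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.163:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}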
	I0708 20:56:30.454701   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:56:30.454707   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:30.456577   59655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:56:26.821665   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:26.822266   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:26.822297   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:26.822209   60369 retry.go:31] will retry after 3.478824168s: waiting for machine to come up
	I0708 20:56:30.302329   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:30.302766   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:30.302796   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:30.302707   60369 retry.go:31] will retry after 3.597512692s: waiting for machine to come up
	I0708 20:56:30.458168   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:56:30.469918   59655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 20:56:30.492348   59655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:56:30.503174   59655 system_pods.go:59] 8 kube-system pods found
	I0708 20:56:30.503210   59655 system_pods.go:61] "coredns-7db6d8ff4d-c4tzw" [e5ea7dde-1134-45d0-b3e2-176e6a8f068e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:56:30.503218   59655 system_pods.go:61] "etcd-default-k8s-diff-port-071971" [693fd668-83c2-43e6-bf43-7b1a9e654db0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:56:30.503226   59655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071971" [eadde33a-b967-4a58-9730-d163e6b8c0c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:56:30.503233   59655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071971" [99bd8e55-e0a9-4071-a0f0-dc9d1e79b58d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:56:30.503238   59655 system_pods.go:61] "kube-proxy-vq4l8" [e2a4779c-e8ed-4f5b-872b-d10604936178] Running
	I0708 20:56:30.503244   59655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071971" [af6b0a79-be1e-4caa-86a6-47ac782ac438] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:56:30.503250   59655 system_pods.go:61] "metrics-server-569cc877fc-h2dzd" [7075aa8e-0716-4965-8a13-3ed804190b3e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:56:30.503257   59655 system_pods.go:61] "storage-provisioner" [9fca5ac9-cd65-4257-b012-20ded80a39a5] Running
	I0708 20:56:30.503265   59655 system_pods.go:74] duration metric: took 10.887672ms to wait for pod list to return data ...
	I0708 20:56:30.503279   59655 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:56:30.509137   59655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:56:30.509170   59655 node_conditions.go:123] node cpu capacity is 2
	I0708 20:56:30.509189   59655 node_conditions.go:105] duration metric: took 5.903588ms to run NodePressure ...
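The pod and node-condition checks above (system_pods.go, node_conditions.go) amount to listing kube-system pods and reading node capacity through the Kubernetes API. A sketch of the same checks with client-go, assuming the client-go modules are available and a kubeconfig in the default location:

// Illustrative sketch (not minikube source): list kube-system pods and print basic
// node capacity, similar to what the log reports above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu capacity %s, ephemeral-storage %s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}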
	I0708 20:56:30.509210   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:30.780430   59655 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:56:30.788138   59655 kubeadm.go:733] kubelet initialised
	I0708 20:56:30.788168   59655 kubeadm.go:734] duration metric: took 7.711132ms waiting for restarted kubelet to initialise ...
	I0708 20:56:30.788177   59655 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:56:30.797001   59655 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:30.939824   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:32.940860   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:34.941652   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:33.901849   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.902332   58678 main.go:141] libmachine: (no-preload-028021) Found IP for machine: 192.168.39.108
	I0708 20:56:33.902356   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has current primary IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.902361   58678 main.go:141] libmachine: (no-preload-028021) Reserving static IP address...
	I0708 20:56:33.902766   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "no-preload-028021", mac: "52:54:00:c5:5d:f8", ip: "192.168.39.108"} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:33.902797   58678 main.go:141] libmachine: (no-preload-028021) DBG | skip adding static IP to network mk-no-preload-028021 - found existing host DHCP lease matching {name: "no-preload-028021", mac: "52:54:00:c5:5d:f8", ip: "192.168.39.108"}
	I0708 20:56:33.902808   58678 main.go:141] libmachine: (no-preload-028021) Reserved static IP address: 192.168.39.108
	I0708 20:56:33.902825   58678 main.go:141] libmachine: (no-preload-028021) Waiting for SSH to be available...
	I0708 20:56:33.902835   58678 main.go:141] libmachine: (no-preload-028021) DBG | Getting to WaitForSSH function...
	I0708 20:56:33.905031   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.905318   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:33.905341   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.905479   58678 main.go:141] libmachine: (no-preload-028021) DBG | Using SSH client type: external
	I0708 20:56:33.905509   58678 main.go:141] libmachine: (no-preload-028021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa (-rw-------)
	I0708 20:56:33.905535   58678 main.go:141] libmachine: (no-preload-028021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:56:33.905560   58678 main.go:141] libmachine: (no-preload-028021) DBG | About to run SSH command:
	I0708 20:56:33.905573   58678 main.go:141] libmachine: (no-preload-028021) DBG | exit 0
	I0708 20:56:34.035510   58678 main.go:141] libmachine: (no-preload-028021) DBG | SSH cmd err, output: <nil>: 
	I0708 20:56:34.035876   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetConfigRaw
	I0708 20:56:34.036501   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:34.039070   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.039467   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.039496   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.039711   58678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/config.json ...
	I0708 20:56:34.039936   58678 machine.go:94] provisionDockerMachine start ...
	I0708 20:56:34.039955   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:34.040191   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.042269   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.042640   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.042666   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.042793   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.042954   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.043125   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.043292   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.043496   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.043662   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.043671   58678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:56:34.156092   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:56:34.156143   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
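The "Using SSH client type: native" lines above are a key-based SSH session running a single command (here `hostname`) on the VM. A sketch of that step with golang.org/x/crypto/ssh, reusing the key path and address from the log; the exact client setup inside minikube differs:

// Illustrative sketch (not minikube source): run `hostname` over SSH with a private key.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.108:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}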
	I0708 20:56:34.156412   58678 buildroot.go:166] provisioning hostname "no-preload-028021"
	I0708 20:56:34.156441   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.156639   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.159015   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.159420   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.159467   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.159606   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.159817   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.160015   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.160214   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.160407   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.160572   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.160584   58678 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-028021 && echo "no-preload-028021" | sudo tee /etc/hostname
	I0708 20:56:34.286222   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-028021
	
	I0708 20:56:34.286250   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.289067   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.289376   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.289399   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.289617   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.289832   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.289991   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.290129   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.290310   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.290471   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.290485   58678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-028021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-028021/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-028021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:56:34.414724   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:56:34.414749   58678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:56:34.414790   58678 buildroot.go:174] setting up certificates
	I0708 20:56:34.414799   58678 provision.go:84] configureAuth start
	I0708 20:56:34.414808   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.415115   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:34.417919   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.418241   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.418273   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.418491   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.421129   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.421603   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.421634   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.421756   58678 provision.go:143] copyHostCerts
	I0708 20:56:34.421818   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:56:34.421839   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:56:34.421906   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:56:34.422023   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:56:34.422034   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:56:34.422064   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:56:34.422151   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:56:34.422161   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:56:34.422196   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:56:34.422276   58678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.no-preload-028021 san=[127.0.0.1 192.168.39.108 localhost minikube no-preload-028021]
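Generating the server cert with that SAN list (127.0.0.1, the VM IP, localhost, minikube and the machine name) is plain x509 issuance against the existing minikube CA. A self-contained sketch with crypto/x509; it creates a throwaway CA instead of loading the ca.pem/ca-key.pem pair referenced above, and error handling is elided for brevity:

// Illustrative sketch (not minikube source): issue a server certificate with the SAN
// list seen in the log, signed by a locally generated CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the sketch is self-contained; minikube reuses its existing CA pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-028021"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.108")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-028021"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}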
	I0708 20:56:34.634189   58678 provision.go:177] copyRemoteCerts
	I0708 20:56:34.634253   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:56:34.634281   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.637123   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.637364   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.637396   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.637609   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.637912   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.638172   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.638410   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:34.726761   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:56:34.751947   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0708 20:56:34.776165   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:56:34.803849   58678 provision.go:87] duration metric: took 389.036659ms to configureAuth
	I0708 20:56:34.803880   58678 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:56:34.804125   58678 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:56:34.804202   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.808559   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.808925   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.808966   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.809164   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.809416   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.809572   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.809710   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.809874   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.810069   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.810097   58678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:56:35.096796   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:56:35.096822   58678 machine.go:97] duration metric: took 1.056870853s to provisionDockerMachine
	I0708 20:56:35.096834   58678 start.go:293] postStartSetup for "no-preload-028021" (driver="kvm2")
	I0708 20:56:35.096847   58678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:56:35.096864   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.097227   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:56:35.097266   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.100040   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.100428   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.100449   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.100637   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.100826   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.100967   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.101128   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.187796   58678 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:56:35.192221   58678 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:56:35.192248   58678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:56:35.192315   58678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:56:35.192383   58678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:56:35.192467   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:56:35.204227   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:35.230404   58678 start.go:296] duration metric: took 133.555408ms for postStartSetup
	I0708 20:56:35.230446   58678 fix.go:56] duration metric: took 21.04954132s for fixHost
	I0708 20:56:35.230464   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.233341   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.233654   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.233685   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.233878   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.234070   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.234248   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.234413   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.234611   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:35.234834   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:35.234849   58678 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 20:56:35.348439   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472195.300246165
	
	I0708 20:56:35.348459   58678 fix.go:216] guest clock: 1720472195.300246165
	I0708 20:56:35.348468   58678 fix.go:229] Guest: 2024-07-08 20:56:35.300246165 +0000 UTC Remote: 2024-07-08 20:56:35.230449891 +0000 UTC m=+338.995803708 (delta=69.796274ms)
	I0708 20:56:35.348487   58678 fix.go:200] guest clock delta is within tolerance: 69.796274ms
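The 69.796274ms delta above is simply the guest-clock reading minus the host-side timestamp captured for the same probe; it can be reproduced from the two values in the log (bc used here purely for illustration):

	echo "1720472195.300246165 - 1720472195.230449891" | bc
	# .069796274 seconds, i.e. the 69.796274ms the fixer reports as within tolerance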
	I0708 20:56:35.348492   58678 start.go:83] releasing machines lock for "no-preload-028021", held for 21.167624903s
	I0708 20:56:35.348509   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.348752   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:35.351300   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.351779   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.351806   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.351977   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352557   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352725   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352799   58678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:56:35.352839   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.352942   58678 ssh_runner.go:195] Run: cat /version.json
	I0708 20:56:35.352969   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.355646   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356037   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.356071   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356117   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356267   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.356470   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.356555   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.356580   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356642   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.356706   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.356770   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.356885   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.357020   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.357154   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.438344   58678 ssh_runner.go:195] Run: systemctl --version
	I0708 20:56:35.470518   58678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:56:35.628022   58678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:56:35.636390   58678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:56:35.636468   58678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:56:35.654729   58678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:56:35.654753   58678 start.go:494] detecting cgroup driver to use...
	I0708 20:56:35.654824   58678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:56:35.678564   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:56:35.697122   58678 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:56:35.697202   58678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:56:35.713388   58678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:56:35.728254   58678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:56:35.874433   58678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:56:36.062521   58678 docker.go:233] disabling docker service ...
	I0708 20:56:36.062614   58678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:56:36.081225   58678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:56:36.096855   58678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:56:36.229455   58678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:56:36.375525   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:56:36.390772   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:56:36.411762   58678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:56:36.411905   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.423149   58678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:56:36.423218   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.434145   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.447568   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.458758   58678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:56:36.469393   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.479663   58678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.501298   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.512407   58678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:56:36.522400   58678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:56:36.522469   58678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:56:36.536310   58678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:56:36.547955   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:36.680408   58678 ssh_runner.go:195] Run: sudo systemctl restart crio
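Everything from 20:56:35.65 to 20:56:36.68 above is host preparation for the cri-o runtime. Condensed into plain shell (commands lifted from the Run: lines; the conmon_cgroup and default_sysctls edits are elided for brevity, and echo stands in for the printf-with-embedded-newline form actually used):

	# disable the competing runtimes
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket && sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket && sudo systemctl mask docker.service
	# point crictl at the cri-o socket, set the pause image and cgroup driver
	echo "runtime-endpoint: unix:///var/run/crio/crio.sock" | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# kernel prerequisites for the bridge CNI, then restart the runtime
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio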
	I0708 20:56:36.860344   58678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:56:36.860416   58678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:56:36.866153   58678 start.go:562] Will wait 60s for crictl version
	I0708 20:56:36.866221   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:36.871623   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:56:36.917564   58678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:56:36.917655   58678 ssh_runner.go:195] Run: crio --version
	I0708 20:56:36.954595   58678 ssh_runner.go:195] Run: crio --version
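The stat on /var/run/crio/crio.sock and the crictl/crio version probes are the readiness gate for the restarted runtime (both waits are capped at 60s in the log). A minimal stand-alone equivalent, for illustration only:

	for _ in $(seq 1 60); do
	  stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
	  sleep 1
	done
	sudo /usr/bin/crictl version   # expects RuntimeName: cri-o, RuntimeVersion: 1.29.1
	crio --version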
	I0708 20:56:36.985788   58678 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:56:32.805051   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:35.303979   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:36.303556   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.303581   59655 pod_ready.go:81] duration metric: took 5.506548207s for pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.303590   59655 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.308571   59655 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.308596   59655 pod_ready.go:81] duration metric: took 4.998994ms for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.308610   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.314379   59655 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.314402   59655 pod_ready.go:81] duration metric: took 5.784289ms for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.314411   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.942775   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:39.440483   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:36.987568   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:36.990699   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:36.991105   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:36.991146   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:36.991378   58678 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 20:56:36.996102   58678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:37.012228   58678 kubeadm.go:877] updating cluster {Name:no-preload-028021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:56:37.012390   58678 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:56:37.012439   58678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:37.050690   58678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:56:37.050715   58678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 20:56:37.050765   58678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.050988   58678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.051005   58678 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.051146   58678 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.051199   58678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.051323   58678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.051396   58678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.051560   58678 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0708 20:56:37.052741   58678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.052826   58678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.052840   58678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.052853   58678 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0708 20:56:37.052910   58678 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.052742   58678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.052741   58678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.052744   58678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.237714   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.238720   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.246932   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0708 20:56:37.253938   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.256152   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.264291   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.304685   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.316620   58678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.2" does not exist at hash "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940" in container runtime
	I0708 20:56:37.316664   58678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.316710   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.352464   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.387003   58678 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0708 20:56:37.387039   58678 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.387078   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.513840   58678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.2" does not exist at hash "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974" in container runtime
	I0708 20:56:37.513886   58678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.513925   58678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.2" does not exist at hash "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe" in container runtime
	I0708 20:56:37.513938   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.513958   58678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.513987   58678 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0708 20:56:37.514000   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514016   58678 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.514054   58678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.2" does not exist at hash "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772" in container runtime
	I0708 20:56:37.514097   58678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.514061   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514136   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514138   58678 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0708 20:56:37.514078   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.514159   58678 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.514191   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514224   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.535635   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.535678   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.535744   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.535744   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.596995   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0708 20:56:37.597092   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.597102   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.651190   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0708 20:56:37.651320   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:37.695843   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0708 20:56:37.695944   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0708 20:56:37.695995   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.2 (exists)
	I0708 20:56:37.696018   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:37.696020   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.696052   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:37.695849   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0708 20:56:37.696071   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.695875   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0708 20:56:37.696117   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:37.696211   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:37.721410   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0708 20:56:37.721453   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.2 (exists)
	I0708 20:56:37.721536   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0708 20:56:37.721644   58678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:39.890974   58678 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.19489331s)
	I0708 20:56:39.891017   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.2 (exists)
	I0708 20:56:39.891070   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2: (2.194976871s)
	I0708 20:56:39.891096   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 from cache
	I0708 20:56:39.891107   58678 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.194875907s)
	I0708 20:56:39.891117   58678 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:39.891120   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0708 20:56:39.891156   58678 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.2: (2.194966409s)
	I0708 20:56:39.891175   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:39.891184   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.2 (exists)
	I0708 20:56:39.891196   58678 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.169535432s)
	I0708 20:56:39.891212   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0708 20:56:37.824606   59655 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:37.824634   59655 pod_ready.go:81] duration metric: took 1.510214968s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.824646   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vq4l8" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.829963   59655 pod_ready.go:92] pod "kube-proxy-vq4l8" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:37.829988   59655 pod_ready.go:81] duration metric: took 5.334688ms for pod "kube-proxy-vq4l8" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.829997   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:38.338575   59655 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:38.338611   59655 pod_ready.go:81] duration metric: took 508.60515ms for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:38.338625   59655 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:40.346498   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:41.939773   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:43.941838   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:41.962256   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.071056184s)
	I0708 20:56:41.962281   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0708 20:56:41.962304   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:41.962349   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:44.325730   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2: (2.363358371s)
	I0708 20:56:44.325760   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 from cache
	I0708 20:56:44.325789   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:44.325853   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:42.845177   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:44.846215   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:46.441086   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:48.939348   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:46.588882   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.263001s)
	I0708 20:56:46.588909   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 from cache
	I0708 20:56:46.588931   58678 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:46.588980   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:50.590689   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.001689035s)
	I0708 20:56:50.590724   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0708 20:56:50.590758   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:50.590813   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:47.345179   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:49.346736   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:51.846003   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:50.940095   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:53.441346   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:52.446198   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2: (1.855362154s)
	I0708 20:56:52.446229   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 from cache
	I0708 20:56:52.446247   58678 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:52.446284   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:53.400379   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0708 20:56:53.400419   58678 cache_images.go:123] Successfully loaded all cached images
	I0708 20:56:53.400424   58678 cache_images.go:92] duration metric: took 16.349697925s to LoadCachedImages
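Because no preload tarball matched this Kubernetes version, each of the eight images above is handled the same way: inspect what podman already has, drop the tag if it does not match the expected digest, and load the cached tarball copied over SSH. A sketch of that per-image pattern for one image from the log (the real code compares digests rather than mere presence):

	img=registry.k8s.io/kube-scheduler:v1.30.2
	tar=/var/lib/minikube/images/kube-scheduler_v1.30.2
	if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	  sudo /usr/bin/crictl rmi "$img" 2>/dev/null || true   # clear any stale tag
	  sudo podman load -i "$tar"                            # "Transferred and loaded ... from cache"
	fi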
	I0708 20:56:53.400436   58678 kubeadm.go:928] updating node { 192.168.39.108 8443 v1.30.2 crio true true} ...
	I0708 20:56:53.400599   58678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-028021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:53.400692   58678 ssh_runner.go:195] Run: crio config
	I0708 20:56:53.452091   58678 cni.go:84] Creating CNI manager for ""
	I0708 20:56:53.452117   58678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:53.452131   58678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:53.452150   58678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.108 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-028021 NodeName:no-preload-028021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:53.452285   58678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-028021"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:53.452344   58678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:53.464447   58678 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:53.464522   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:53.474930   58678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0708 20:56:53.493701   58678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:53.511491   58678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0708 20:56:53.530848   58678 ssh_runner.go:195] Run: grep 192.168.39.108	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:53.534931   58678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:53.547590   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:53.658960   58678 ssh_runner.go:195] Run: sudo systemctl start kubelet
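The scp/grep/systemctl steps above stage the kubelet unit, its kubeadm drop-in and the new kubeadm.yaml, pin control-plane.minikube.internal in /etc/hosts, and start the kubelet. Roughly the following, with placeholder local file names standing in for content minikube streams from memory:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the ExecStart drop-in shown above
	sudo cp kubelet.service /lib/systemd/system/kubelet.service
	sudo cp kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml.new
	grep -q 'control-plane.minikube.internal' /etc/hosts || \
	  echo '192.168.39.108	control-plane.minikube.internal' | sudo tee -a /etc/hosts   # the test rewrites /etc/hosts in place; append shown for brevity
	sudo systemctl daemon-reload && sudo systemctl start kubelet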
	I0708 20:56:53.677127   58678 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021 for IP: 192.168.39.108
	I0708 20:56:53.677151   58678 certs.go:194] generating shared ca certs ...
	I0708 20:56:53.677169   58678 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:53.677296   58678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:53.677330   58678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:53.677338   58678 certs.go:256] generating profile certs ...
	I0708 20:56:53.677420   58678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.key
	I0708 20:56:53.677471   58678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.key.c3084b2b
	I0708 20:56:53.677511   58678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.key
	I0708 20:56:53.677613   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:53.677639   58678 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:53.677645   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:53.677677   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:53.677752   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:53.677785   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:53.677825   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:53.680483   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:53.739386   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:53.770850   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:53.813958   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:53.850256   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0708 20:56:53.891539   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:56:53.921136   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:53.948966   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:56:53.977129   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:54.002324   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:54.028222   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:54.054099   58678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:54.073386   58678 ssh_runner.go:195] Run: openssl version
	I0708 20:56:54.079883   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:54.092980   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.097451   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.097503   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.103507   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:54.115123   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:54.126757   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.131534   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.131579   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.137333   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:54.148368   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:54.159628   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.164230   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.164280   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.170068   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:56:54.182152   58678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:54.187146   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:54.193425   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:54.200491   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:54.207006   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:54.213285   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:54.220313   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
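Each openssl run above asserts that the named certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit would force regeneration before kubeadm is invoked. The same check over the control-plane certs touched above:

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    && echo "$c: valid for at least 24h" \
	    || echo "$c: expires within 24h, would be regenerated"
	done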
	I0708 20:56:54.227497   58678 kubeadm.go:391] StartCluster: {Name:no-preload-028021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:54.227597   58678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:54.227657   58678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:54.273025   58678 cri.go:89] found id: ""
	I0708 20:56:54.273094   58678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:54.284942   58678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:54.284965   58678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:54.284972   58678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:54.285023   58678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:54.296666   58678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:54.297740   58678 kubeconfig.go:125] found "no-preload-028021" server: "https://192.168.39.108:8443"
	I0708 20:56:54.299928   58678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:54.310186   58678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.108
	I0708 20:56:54.310224   58678 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:54.310235   58678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:54.310290   58678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:54.351640   58678 cri.go:89] found id: ""
	I0708 20:56:54.351709   58678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:54.370292   58678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:54.380551   58678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:54.380571   58678 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:54.380611   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:56:54.391462   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:54.391525   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:54.401804   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:56:54.411423   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:54.411501   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:54.422126   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:56:54.432236   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:54.432299   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:54.443001   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:56:54.454210   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:54.454271   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:56:54.465426   58678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
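The four grep/rm pairs above apply a single rule: keep an existing kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the kubeadm phases below regenerate it. Written as a loop (file list exactly as logged):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml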
	I0708 20:56:54.477714   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:54.593844   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.651092   58678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.057214047s)
	I0708 20:56:55.651120   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.862342   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.952093   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:56.070164   58678 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:56.070232   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:53.846869   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:55.847242   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:55.941645   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:58.440406   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:56.570644   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:57.071067   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:57.099879   58678 api_server.go:72] duration metric: took 1.02971362s to wait for apiserver process to appear ...
	I0708 20:56:57.099907   58678 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:57.099932   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.102677   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:57:00.102805   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:57:00.102854   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.143035   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:57:00.143069   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:57:00.600694   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.605315   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:00.605349   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:01.100628   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:01.106209   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:01.106235   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:58.345619   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:00.346515   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:01.600656   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:01.605348   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:01.605381   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:02.101023   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:02.105457   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:02.105490   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:02.600058   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:02.604370   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:02.604397   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:03.100641   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:03.105655   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:03.105685   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:03.600193   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:03.604714   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I0708 20:57:03.617761   58678 api_server.go:141] control plane version: v1.30.2
	I0708 20:57:03.617795   58678 api_server.go:131] duration metric: took 6.517881236s to wait for apiserver health ...
	I0708 20:57:03.617805   58678 cni.go:84] Creating CNI manager for ""
	I0708 20:57:03.617811   58678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:57:03.619739   58678 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:57:00.940450   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:03.448484   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:03.621363   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:57:03.635846   58678 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 20:57:03.667045   58678 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:57:03.686236   58678 system_pods.go:59] 8 kube-system pods found
	I0708 20:57:03.686308   58678 system_pods.go:61] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:57:03.686322   58678 system_pods.go:61] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:57:03.686334   58678 system_pods.go:61] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:57:03.686348   58678 system_pods.go:61] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:57:03.686354   58678 system_pods.go:61] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 20:57:03.686363   58678 system_pods.go:61] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:57:03.686371   58678 system_pods.go:61] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:57:03.686379   58678 system_pods.go:61] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 20:57:03.686390   58678 system_pods.go:74] duration metric: took 19.320099ms to wait for pod list to return data ...
	I0708 20:57:03.686402   58678 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:57:03.696401   58678 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:57:03.696436   58678 node_conditions.go:123] node cpu capacity is 2
	I0708 20:57:03.696449   58678 node_conditions.go:105] duration metric: took 10.038061ms to run NodePressure ...
	I0708 20:57:03.696474   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:57:03.981698   58678 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:57:03.987357   58678 kubeadm.go:733] kubelet initialised
	I0708 20:57:03.987379   58678 kubeadm.go:734] duration metric: took 5.653044ms waiting for restarted kubelet to initialise ...
	I0708 20:57:03.987387   58678 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:03.993341   58678 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:03.999133   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:03.999165   58678 pod_ready.go:81] duration metric: took 5.798521ms for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:03.999177   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:03.999188   58678 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.004640   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "etcd-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.004666   58678 pod_ready.go:81] duration metric: took 5.471219ms for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.004676   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "etcd-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.004685   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.011313   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-apiserver-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.011342   58678 pod_ready.go:81] duration metric: took 6.65044ms for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.011354   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-apiserver-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.011364   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.071038   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.071092   58678 pod_ready.go:81] duration metric: took 59.716762ms for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.071105   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.071114   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.470702   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-proxy-6p6l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.470732   58678 pod_ready.go:81] duration metric: took 399.6044ms for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.470743   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-proxy-6p6l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.470753   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.871002   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-scheduler-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.871036   58678 pod_ready.go:81] duration metric: took 400.275337ms for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.871045   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-scheduler-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.871052   58678 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:05.270858   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:05.270883   58678 pod_ready.go:81] duration metric: took 399.822389ms for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:05.270892   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:05.270899   58678 pod_ready.go:38] duration metric: took 1.283504929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:05.270914   58678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 20:57:05.284879   58678 ops.go:34] apiserver oom_adj: -16
	I0708 20:57:05.284900   58678 kubeadm.go:591] duration metric: took 10.999921787s to restartPrimaryControlPlane
	I0708 20:57:05.284912   58678 kubeadm.go:393] duration metric: took 11.057424996s to StartCluster
	I0708 20:57:05.284931   58678 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:57:05.285024   58678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:57:05.287297   58678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:57:05.287607   58678 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 20:57:05.287708   58678 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 20:57:05.287790   58678 addons.go:69] Setting storage-provisioner=true in profile "no-preload-028021"
	I0708 20:57:05.287807   58678 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:57:05.287809   58678 addons.go:69] Setting default-storageclass=true in profile "no-preload-028021"
	I0708 20:57:05.287845   58678 addons.go:69] Setting metrics-server=true in profile "no-preload-028021"
	I0708 20:57:05.287900   58678 addons.go:234] Setting addon metrics-server=true in "no-preload-028021"
	W0708 20:57:05.287912   58678 addons.go:243] addon metrics-server should already be in state true
	I0708 20:57:05.287946   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.287854   58678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-028021"
	I0708 20:57:05.287825   58678 addons.go:234] Setting addon storage-provisioner=true in "no-preload-028021"
	W0708 20:57:05.288007   58678 addons.go:243] addon storage-provisioner should already be in state true
	I0708 20:57:05.288040   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.288276   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288308   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.288380   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288382   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288430   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.288413   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.289690   58678 out.go:177] * Verifying Kubernetes components...
	I0708 20:57:05.291336   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:57:05.310203   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0708 20:57:05.310610   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.311107   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.311129   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.311527   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.311990   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.312026   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.332966   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0708 20:57:05.332984   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I0708 20:57:05.333056   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I0708 20:57:05.333449   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333466   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333497   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333994   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334014   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334138   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334146   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334158   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334163   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334347   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334514   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.334640   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334683   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334822   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.335285   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.335304   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.337444   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.338763   58678 addons.go:234] Setting addon default-storageclass=true in "no-preload-028021"
	W0708 20:57:05.338785   58678 addons.go:243] addon default-storageclass should already be in state true
	I0708 20:57:05.338814   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.339217   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.339304   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.339800   58678 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 20:57:05.341280   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 20:57:05.341303   58678 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 20:57:05.341327   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.344838   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.345488   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.345504   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.345683   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.345892   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.346146   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.346326   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.359060   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0708 20:57:05.359692   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.360186   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.360207   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.360545   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.361128   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.361173   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.361352   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0708 20:57:05.361971   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.362509   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.362525   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.362911   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.363148   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.364747   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.366914   58678 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:57:05.368450   58678 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:57:05.368467   58678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 20:57:05.368483   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.372067   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.372368   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.372387   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.372767   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.373030   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.373235   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.373389   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.379255   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39973
	I0708 20:57:05.379732   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.380405   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.380428   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.380832   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.381039   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.382973   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.383191   58678 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 20:57:05.383211   58678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 20:57:05.383231   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.386273   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.386682   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.386705   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.386997   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.387184   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.387336   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.387497   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.506081   58678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:57:05.525373   58678 node_ready.go:35] waiting up to 6m0s for node "no-preload-028021" to be "Ready" ...
	I0708 20:57:05.594638   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 20:57:05.594665   58678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 20:57:05.615378   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:57:05.620306   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 20:57:05.620331   58678 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 20:57:05.639840   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 20:57:05.692078   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 20:57:05.692109   58678 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 20:57:05.756364   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 20:57:06.822244   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.206830336s)
	I0708 20:57:06.822310   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18243745s)
	I0708 20:57:06.822323   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822385   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065981271s)
	I0708 20:57:06.822418   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822432   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822390   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822351   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822504   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822850   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822870   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.822879   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822886   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822955   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.822971   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822976   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822993   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.822995   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.823009   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.823020   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.823010   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.823051   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.823154   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.823164   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.823366   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.823380   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.823390   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.825436   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.825455   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.825465   58678 addons.go:475] Verifying addon metrics-server=true in "no-preload-028021"
	I0708 20:57:06.830088   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.830108   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.830406   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.830420   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.830423   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.832322   58678 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0708 20:57:02.845629   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:05.353827   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:05.940469   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:08.439911   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:06.833974   58678 addons.go:510] duration metric: took 1.546270475s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0708 20:57:07.529328   58678 node_ready.go:53] node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:09.529406   58678 node_ready.go:53] node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:11.030134   58678 node_ready.go:49] node "no-preload-028021" has status "Ready":"True"
	I0708 20:57:11.030162   58678 node_ready.go:38] duration metric: took 5.504751555s for node "no-preload-028021" to be "Ready" ...
	I0708 20:57:11.030174   58678 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:11.035309   58678 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.039750   58678 pod_ready.go:92] pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.039772   58678 pod_ready.go:81] duration metric: took 4.436756ms for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.039783   58678 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.044726   58678 pod_ready.go:92] pod "etcd-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.044748   58678 pod_ready.go:81] duration metric: took 4.958058ms for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.044756   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.049083   58678 pod_ready.go:92] pod "kube-apiserver-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.049104   58678 pod_ready.go:81] duration metric: took 4.34014ms for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.049115   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:07.846290   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:10.344964   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:10.939618   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:13.445191   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:13.056307   58678 pod_ready.go:102] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:15.056817   58678 pod_ready.go:102] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:16.063838   58678 pod_ready.go:92] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.063864   58678 pod_ready.go:81] duration metric: took 5.014740635s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.063875   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.082486   58678 pod_ready.go:92] pod "kube-proxy-6p6l6" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.082529   58678 pod_ready.go:81] duration metric: took 18.642044ms for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.082545   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.092312   58678 pod_ready.go:92] pod "kube-scheduler-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.092337   58678 pod_ready.go:81] duration metric: took 9.783638ms for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.092347   58678 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
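The hundreds of pod_ready.go:102 lines that follow show the metrics-server pods in all three profiles (metrics-server-569cc877fc-4kpfm, -h2dzd, -h4btg) stuck with Ready:"False" until the test harness gives up. A minimal inspection sketch, using the pod name and namespace exactly as they appear in the log; the --context value assumes minikube's usual "context equals profile name" convention, which this log does not state explicitly:

	# Hedged sketch: inspect why metrics-server never reports Ready in the no-preload profile.
	kubectl --context no-preload-028021 -n kube-system get pods -o wide
	kubectl --context no-preload-028021 -n kube-system describe pod metrics-server-569cc877fc-4kpfm
	kubectl --context no-preload-028021 -n kube-system logs metrics-server-569cc877fc-4kpfm

The describe output's Events section and the container log are usually enough to tell an image-pull problem apart from a failing readiness probe; nothing in this log confirms which applies here.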
	I0708 20:57:16.353120   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:57:16.353203   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:57:16.355269   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:57:16.355317   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:57:16.355404   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:57:16.355558   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:57:16.355708   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:57:16.355815   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:57:16.358151   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:57:16.358312   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:57:16.358411   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:57:16.358531   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:57:16.358641   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:57:16.358732   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:57:16.358798   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:57:16.358893   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:57:16.359004   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:57:16.359128   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:57:16.359209   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:57:16.359288   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:57:16.359384   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:57:16.359509   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:57:16.359614   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:57:16.359725   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:57:16.359794   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:57:16.359881   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:57:16.359963   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:57:16.360002   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:57:16.360099   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:57:16.361960   57466 out.go:204]   - Booting up control plane ...
	I0708 20:57:16.362053   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:57:16.362196   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:57:16.362283   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:57:16.362402   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:57:16.362589   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:57:16.362819   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:57:16.362930   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363170   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363242   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363473   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363580   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363786   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363873   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364093   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364247   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364435   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364445   57466 kubeadm.go:309] 
	I0708 20:57:16.364476   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:57:16.364533   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:57:16.364541   57466 kubeadm.go:309] 
	I0708 20:57:16.364601   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:57:16.364636   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:57:16.364796   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:57:16.364820   57466 kubeadm.go:309] 
	I0708 20:57:16.364958   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:57:16.365016   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:57:16.365057   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:57:16.365063   57466 kubeadm.go:309] 
	I0708 20:57:16.365208   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:57:16.365339   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:57:16.365356   57466 kubeadm.go:309] 
	I0708 20:57:16.365490   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:57:16.365589   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:57:16.365694   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:57:16.365869   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:57:16.365969   57466 kubeadm.go:309] 
	I0708 20:57:16.365972   57466 kubeadm.go:393] duration metric: took 7m56.670441698s to StartCluster
	I0708 20:57:16.366023   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:57:16.366090   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:57:16.435868   57466 cri.go:89] found id: ""
	I0708 20:57:16.435896   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.435904   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:57:16.435910   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:57:16.435969   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:57:16.478844   57466 cri.go:89] found id: ""
	I0708 20:57:16.478881   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.478896   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:57:16.478904   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:57:16.478974   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:57:16.517414   57466 cri.go:89] found id: ""
	I0708 20:57:16.517439   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.517448   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:57:16.517455   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:57:16.517516   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:57:16.557036   57466 cri.go:89] found id: ""
	I0708 20:57:16.557063   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.557074   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:57:16.557081   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:57:16.557153   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:57:16.593604   57466 cri.go:89] found id: ""
	I0708 20:57:16.593631   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.593641   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:57:16.593648   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:57:16.593704   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:57:16.634143   57466 cri.go:89] found id: ""
	I0708 20:57:16.634173   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.634183   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:57:16.634190   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:57:16.634248   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:57:16.676553   57466 cri.go:89] found id: ""
	I0708 20:57:16.676585   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.676595   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:57:16.676602   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:57:16.676663   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:57:16.715652   57466 cri.go:89] found id: ""
	I0708 20:57:16.715674   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.715682   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:57:16.715692   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:57:16.715703   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:57:16.730747   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:57:16.730776   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:57:16.814950   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:57:16.814976   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:57:16.815005   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:57:16.921144   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:57:16.921194   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:57:16.973261   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:57:16.973294   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 20:57:17.031242   57466 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0708 20:57:17.031307   57466 out.go:239] * 
	W0708 20:57:17.031362   57466 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.031389   57466 out.go:239] * 
	W0708 20:57:17.032214   57466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:57:17.035847   57466 out.go:177] 
	W0708 20:57:17.037198   57466 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.037247   57466 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0708 20:57:17.037274   57466 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0708 20:57:17.039077   57466 out.go:177] 
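The old-k8s-version (v1.20.0) start above exits with K8S_KUBELET_NOT_RUNNING after kubeadm's wait-control-plane phase times out with the kubelet healthz endpoint on 127.0.0.1:10248 refusing connections. A minimal troubleshooting sketch, assuming shell access to the guest (for example via minikube ssh) and using only the commands the failure message and the suggestion line already name; <profile> and CONTAINERID are placeholders, not values taken from this log:

	# Inside the minikube guest: check whether the kubelet is running and why it may have exited.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# List any control-plane containers CRI-O managed to start, then inspect a failing one.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# From the host: retry the start with the cgroup-driver override suggested in the log.
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

This only mirrors the remediation steps the log itself prints (systemctl/journalctl, crictl, and the kubelet.cgroup-driver suggestion tied to minikube issue #4172); it is not a confirmed root cause for this run.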
	I0708 20:57:12.345241   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:14.346235   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:16.347467   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:15.940334   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:17.943302   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:18.102691   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:20.599066   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:18.847908   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:21.345112   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:20.441347   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:22.939786   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:24.940449   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:22.600192   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:25.100175   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:23.346438   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:25.845181   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.439923   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:29.940540   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.600010   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:30.099104   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.845456   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:29.845526   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.440285   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.939729   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.101616   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.598135   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.345268   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.844782   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.845440   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.940110   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:38.940964   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.600034   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:39.099711   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.100745   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:38.847223   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.344382   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.441047   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.939510   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.599982   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:46.101913   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.345029   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:45.345390   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:45.939787   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:47.940956   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:49.941949   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:48.598871   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:50.600154   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:47.346271   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:49.346661   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:51.844897   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:52.439646   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:54.440569   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:52.604096   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:55.103841   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:54.345832   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:56.845398   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:56.440640   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:58.939537   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:57.598505   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:00.098797   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:58.848087   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:01.346566   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:00.940434   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:03.439927   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:02.602188   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:05.100284   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:03.848841   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:06.346912   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:05.441676   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:07.942369   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:07.599099   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:09.601188   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:08.848926   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:11.346458   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:10.439620   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:12.440274   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:14.939694   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:12.098918   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:14.099419   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:13.844947   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:15.845203   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:16.940812   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:18.941307   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:16.599322   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:19.098815   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:21.100160   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:17.845975   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:20.347071   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:21.439802   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:23.441183   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:23.598459   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:26.098717   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:22.844674   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:24.845210   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:26.848564   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:25.939783   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:28.439490   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:28.099236   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:30.599130   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:29.344306   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:31.345070   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:30.439832   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:32.440229   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:34.441525   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:32.600143   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:35.100068   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:33.345938   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:35.845421   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:36.939642   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:38.941263   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:37.599587   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:40.099121   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:37.845529   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:40.345830   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:41.441175   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:43.941076   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:42.099418   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:44.101452   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:42.844426   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:44.846831   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:45.941732   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:48.440398   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:46.599328   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:48.600055   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:51.099949   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:47.347094   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:49.846223   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:50.940172   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:52.940229   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:54.941034   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:53.100619   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:55.599681   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:52.347726   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:54.845461   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:56.846142   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:56.941957   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.439408   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:57.600406   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.600450   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.344802   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:01.345852   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:01.939259   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:03.940182   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:02.101218   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:04.600651   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:03.845810   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:05.846170   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:05.940757   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:08.439635   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:07.100571   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:09.100718   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:08.344894   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:10.346744   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:10.440413   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:12.440882   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:14.940151   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:11.601260   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:13.603589   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:16.112928   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:12.848135   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:15.346591   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:17.440326   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:19.440421   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:18.598791   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:20.600589   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:17.845413   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:19.849057   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:21.941414   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:24.441214   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:23.100854   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:25.599374   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:22.346925   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:24.845239   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:26.941311   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:28.948332   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:28.100928   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:30.600465   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:27.345835   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:29.846655   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:31.848193   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:31.440572   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:33.939354   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:33.100068   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:35.601159   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:34.345252   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:36.346479   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:35.939843   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:37.941381   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:38.100393   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.102157   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:38.844435   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.845328   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.438849   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:42.441256   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:44.442877   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:42.601119   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:45.101132   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:43.345149   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:45.345522   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:46.940287   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:48.941589   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:47.101717   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:49.598367   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:47.846030   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:49.846247   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:51.438745   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:53.441587   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:51.599309   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:54.105369   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:56.110085   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:52.347026   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:54.845971   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:55.939702   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:57.940731   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:58.598821   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:00.599435   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:57.345043   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:59.346796   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:01.347030   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:00.439467   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:02.443994   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:04.941721   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:02.599994   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:05.098379   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:03.845802   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:05.846016   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:07.439561   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:09.440326   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:07.099339   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:09.599746   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:08.345432   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:10.347888   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:11.940331   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:13.940496   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:12.100751   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:14.597860   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:12.349653   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:14.846452   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:16.440554   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:18.441219   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:19.434076   59107 pod_ready.go:81] duration metric: took 4m0.000896796s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" ...
	E0708 21:00:19.434112   59107 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0708 21:00:19.434131   59107 pod_ready.go:38] duration metric: took 4m10.050938227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:00:19.434157   59107 kubeadm.go:591] duration metric: took 4m18.183643708s to restartPrimaryControlPlane
	W0708 21:00:19.434219   59107 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 21:00:19.434258   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 21:00:16.598896   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:18.598974   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:20.599027   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:17.345157   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:19.345498   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:21.346939   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:22.599140   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:24.600455   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:23.347325   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:25.846384   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:27.104536   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:29.598836   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:27.847635   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:30.345065   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:31.600246   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:34.099964   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:32.348256   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:34.846942   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:36.598075   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:38.599175   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:40.599720   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:37.345319   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:38.339580   59655 pod_ready.go:81] duration metric: took 4m0.000925316s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" ...
	E0708 21:00:38.339615   59655 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0708 21:00:38.339635   59655 pod_ready.go:38] duration metric: took 4m7.551446129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:00:38.339667   59655 kubeadm.go:591] duration metric: took 4m17.566917749s to restartPrimaryControlPlane
	W0708 21:00:38.339731   59655 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 21:00:38.339763   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 21:00:43.101768   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:45.102321   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:47.599770   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:50.100703   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:51.419295   59107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.985013246s)
	I0708 21:00:51.419373   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:00:51.438876   59107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:00:51.451558   59107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:00:51.463932   59107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:00:51.463959   59107 kubeadm.go:156] found existing configuration files:
	
	I0708 21:00:51.464013   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 21:00:51.476729   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:00:51.476791   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:00:51.488357   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 21:00:51.499650   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:00:51.499720   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:00:51.510559   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 21:00:51.522747   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:00:51.522821   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:00:51.534156   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 21:00:51.545057   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:00:51.545123   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 21:00:51.556712   59107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:00:51.766960   59107 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 21:00:52.599619   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:55.102565   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:01.185862   59107 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 21:01:01.185936   59107 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:01:01.186061   59107 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:01:01.186246   59107 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:01:01.186375   59107 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 21:01:01.186477   59107 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:01:01.188387   59107 out.go:204]   - Generating certificates and keys ...
	I0708 21:01:01.188489   59107 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:01:01.188575   59107 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:01:01.188655   59107 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 21:01:01.188754   59107 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 21:01:01.188856   59107 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 21:01:01.188937   59107 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 21:01:01.189015   59107 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 21:01:01.189107   59107 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 21:01:01.189216   59107 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 21:01:01.189326   59107 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 21:01:01.189381   59107 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 21:01:01.189445   59107 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:01:01.189504   59107 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:01:01.189571   59107 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 21:01:01.189636   59107 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:01:01.189732   59107 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:01:01.189822   59107 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:01:01.189939   59107 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:01:01.190019   59107 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:01:01.192426   59107 out.go:204]   - Booting up control plane ...
	I0708 21:01:01.192527   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:01:01.192598   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:01:01.192674   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:01:01.192795   59107 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:01:01.192892   59107 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:01:01.192949   59107 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:01:01.193078   59107 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 21:01:01.193150   59107 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 21:01:01.193204   59107 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001227366s
	I0708 21:01:01.193274   59107 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 21:01:01.193329   59107 kubeadm.go:309] [api-check] The API server is healthy after 5.506719576s
	I0708 21:01:01.193428   59107 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 21:01:01.193574   59107 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 21:01:01.193655   59107 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 21:01:01.193854   59107 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-239931 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 21:01:01.193936   59107 kubeadm.go:309] [bootstrap-token] Using token: uu1yg0.6mx8u39sjlxfysca
	I0708 21:01:01.196508   59107 out.go:204]   - Configuring RBAC rules ...
	I0708 21:01:01.196638   59107 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 21:01:01.196748   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 21:01:01.196867   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 21:01:01.196978   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 21:01:01.197141   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 21:01:01.197217   59107 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 21:01:01.197316   59107 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 21:01:01.197355   59107 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 21:01:01.197397   59107 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 21:01:01.197403   59107 kubeadm.go:309] 
	I0708 21:01:01.197451   59107 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 21:01:01.197457   59107 kubeadm.go:309] 
	I0708 21:01:01.197542   59107 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 21:01:01.197555   59107 kubeadm.go:309] 
	I0708 21:01:01.197597   59107 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 21:01:01.197673   59107 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 21:01:01.197748   59107 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 21:01:01.197761   59107 kubeadm.go:309] 
	I0708 21:01:01.197850   59107 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 21:01:01.197860   59107 kubeadm.go:309] 
	I0708 21:01:01.197903   59107 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 21:01:01.197912   59107 kubeadm.go:309] 
	I0708 21:01:01.197971   59107 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 21:01:01.198059   59107 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 21:01:01.198155   59107 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 21:01:01.198165   59107 kubeadm.go:309] 
	I0708 21:01:01.198279   59107 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 21:01:01.198389   59107 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 21:01:01.198400   59107 kubeadm.go:309] 
	I0708 21:01:01.198515   59107 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token uu1yg0.6mx8u39sjlxfysca \
	I0708 21:01:01.198663   59107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 21:01:01.198697   59107 kubeadm.go:309] 	--control-plane 
	I0708 21:01:01.198706   59107 kubeadm.go:309] 
	I0708 21:01:01.198821   59107 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 21:01:01.198830   59107 kubeadm.go:309] 
	I0708 21:01:01.198942   59107 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token uu1yg0.6mx8u39sjlxfysca \
	I0708 21:01:01.199078   59107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 21:01:01.199095   59107 cni.go:84] Creating CNI manager for ""
	I0708 21:01:01.199104   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:01:01.201409   59107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 21:00:57.600428   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:00.101501   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:01.202540   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 21:01:01.214691   59107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 21:01:01.238039   59107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 21:01:01.238180   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:01.238204   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-239931 minikube.k8s.io/updated_at=2024_07_08T21_01_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=embed-certs-239931 minikube.k8s.io/primary=true
	I0708 21:01:01.255228   59107 ops.go:34] apiserver oom_adj: -16
	I0708 21:01:01.441736   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:01.942570   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.442775   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.941941   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:03.441910   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:03.942762   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:04.442791   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:04.942122   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.600102   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:04.601357   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:05.442031   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:05.942414   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:06.442353   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:06.942075   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:07.442007   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:07.941952   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:08.442578   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:08.942110   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:09.442438   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:09.942436   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:10.666697   59655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.326909913s)
	I0708 21:01:10.666766   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:10.684044   59655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:01:10.695291   59655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:01:10.705771   59655 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:01:10.705790   59655 kubeadm.go:156] found existing configuration files:
	
	I0708 21:01:10.705829   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0708 21:01:10.717858   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:01:10.717911   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:01:10.728721   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0708 21:01:10.738917   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:01:10.738985   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:01:10.749795   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0708 21:01:10.760976   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:01:10.761036   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:01:10.771625   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0708 21:01:10.781677   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:01:10.781738   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 21:01:10.791622   59655 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:01:10.855152   59655 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 21:01:10.855246   59655 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:01:11.027005   59655 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:01:11.027132   59655 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:01:11.027245   59655 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 21:01:11.262898   59655 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:01:07.098267   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:09.099083   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:11.099398   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:11.264777   59655 out.go:204]   - Generating certificates and keys ...
	I0708 21:01:11.264897   59655 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:01:11.265011   59655 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:01:11.265143   59655 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 21:01:11.265245   59655 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 21:01:11.265331   59655 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 21:01:11.265412   59655 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 21:01:11.265516   59655 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 21:01:11.265601   59655 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 21:01:11.265692   59655 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 21:01:11.265806   59655 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 21:01:11.265883   59655 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 21:01:11.265979   59655 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:01:11.307094   59655 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:01:11.410219   59655 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 21:01:11.840751   59655 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:01:12.163906   59655 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:01:12.260797   59655 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:01:12.261513   59655 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:01:12.264128   59655 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:01:12.266095   59655 out.go:204]   - Booting up control plane ...
	I0708 21:01:12.266212   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:01:12.266301   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:01:12.267540   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:01:12.290823   59655 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:01:12.291578   59655 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:01:12.291693   59655 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:01:10.442308   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:10.942270   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:11.442233   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:11.942533   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:12.442040   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:12.942629   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:13.441853   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:13.565655   59107 kubeadm.go:1107] duration metric: took 12.327535547s to wait for elevateKubeSystemPrivileges
	W0708 21:01:13.565704   59107 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 21:01:13.565714   59107 kubeadm.go:393] duration metric: took 5m12.375759038s to StartCluster
	I0708 21:01:13.565736   59107 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:13.565845   59107 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:01:13.568610   59107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:13.568940   59107 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:01:13.568980   59107 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 21:01:13.569061   59107 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-239931"
	I0708 21:01:13.569098   59107 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-239931"
	W0708 21:01:13.569113   59107 addons.go:243] addon storage-provisioner should already be in state true
	I0708 21:01:13.569136   59107 addons.go:69] Setting metrics-server=true in profile "embed-certs-239931"
	I0708 21:01:13.569098   59107 addons.go:69] Setting default-storageclass=true in profile "embed-certs-239931"
	I0708 21:01:13.569169   59107 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-239931"
	I0708 21:01:13.569178   59107 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:01:13.569149   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.569185   59107 addons.go:234] Setting addon metrics-server=true in "embed-certs-239931"
	W0708 21:01:13.569244   59107 addons.go:243] addon metrics-server should already be in state true
	I0708 21:01:13.569274   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.569617   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569639   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569648   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.569671   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.569673   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569698   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.570670   59107 out.go:177] * Verifying Kubernetes components...
	I0708 21:01:13.572338   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:01:13.590692   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40615
	I0708 21:01:13.590708   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0708 21:01:13.590701   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0708 21:01:13.591271   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591375   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591622   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591792   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.591806   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.591888   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.591909   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.592348   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.592368   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.592387   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.592422   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.592655   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.593065   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.593092   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.593568   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.594139   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.594196   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.596834   59107 addons.go:234] Setting addon default-storageclass=true in "embed-certs-239931"
	W0708 21:01:13.596857   59107 addons.go:243] addon default-storageclass should already be in state true
	I0708 21:01:13.596892   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.597258   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.597278   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.615398   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0708 21:01:13.616090   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.617374   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.617395   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.617542   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37809
	I0708 21:01:13.618025   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.618066   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.618450   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.618538   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.618563   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.618953   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.619151   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.621015   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.622114   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43107
	I0708 21:01:13.622533   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.623046   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.623071   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.623346   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.623757   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.624750   59107 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 21:01:13.625744   59107 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:01:13.626604   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 21:01:13.626626   59107 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 21:01:13.626650   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.627717   59107 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:13.627737   59107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 21:01:13.627756   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.628207   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.628245   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.631548   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.633692   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.633737   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.634732   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.634960   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.635186   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.635262   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.635282   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.635415   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.635581   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.635946   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.636122   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.636282   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.636468   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.650948   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34883
	I0708 21:01:13.651543   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.652143   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.652165   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.652659   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.652835   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.654717   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.654971   59107 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:13.654988   59107 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 21:01:13.655006   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.658670   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.659361   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.659475   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.659800   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.660109   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.660275   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.660406   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.813860   59107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:01:13.832841   59107 node_ready.go:35] waiting up to 6m0s for node "embed-certs-239931" to be "Ready" ...
	I0708 21:01:13.842398   59107 node_ready.go:49] node "embed-certs-239931" has status "Ready":"True"
	I0708 21:01:13.842420   59107 node_ready.go:38] duration metric: took 9.540746ms for node "embed-certs-239931" to be "Ready" ...
	I0708 21:01:13.842430   59107 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:13.853426   59107 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.861421   59107 pod_ready.go:92] pod "etcd-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.861451   59107 pod_ready.go:81] duration metric: took 7.991733ms for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.861466   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.873198   59107 pod_ready.go:92] pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.873228   59107 pod_ready.go:81] duration metric: took 11.754017ms for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.873243   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.882509   59107 pod_ready.go:92] pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.882560   59107 pod_ready.go:81] duration metric: took 9.307056ms for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.882574   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.890814   59107 pod_ready.go:92] pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.890843   59107 pod_ready.go:81] duration metric: took 8.26049ms for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.890854   59107 pod_ready.go:38] duration metric: took 48.414688ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:13.890872   59107 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:13.890934   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:13.913170   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 21:01:13.913199   59107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 21:01:13.936334   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:13.942642   59107 api_server.go:72] duration metric: took 373.624334ms to wait for apiserver process to appear ...
	I0708 21:01:13.942673   59107 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:13.942696   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 21:01:13.947241   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0708 21:01:13.948330   59107 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:13.948354   59107 api_server.go:131] duration metric: took 5.673644ms to wait for apiserver health ...
	I0708 21:01:13.948364   59107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:13.968333   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:13.999888   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 21:01:13.999920   59107 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 21:01:14.072446   59107 system_pods.go:59] 5 kube-system pods found
	I0708 21:01:14.072553   59107 system_pods.go:61] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.072575   59107 system_pods.go:61] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.072594   59107 system_pods.go:61] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.072608   59107 system_pods.go:61] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending
	I0708 21:01:14.072621   59107 system_pods.go:61] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.072637   59107 system_pods.go:74] duration metric: took 124.266452ms to wait for pod list to return data ...
	I0708 21:01:14.072663   59107 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:14.111310   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:14.111337   59107 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 21:01:14.196596   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:14.248043   59107 default_sa.go:45] found service account: "default"
	I0708 21:01:14.248075   59107 default_sa.go:55] duration metric: took 175.396297ms for default service account to be created ...
	I0708 21:01:14.248086   59107 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:14.381129   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.381166   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.381490   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.381507   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.381517   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.381525   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.383203   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:14.383213   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.383229   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.430533   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.430558   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.430835   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:14.431498   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.431558   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.440088   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.440129   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.440140   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.440148   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.440156   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.440162   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.440171   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.440176   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.440199   59107 retry.go:31] will retry after 211.74015ms: missing components: kube-dns, kube-proxy
	I0708 21:01:14.660845   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.660901   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.660916   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.660928   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.660938   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.660946   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.660990   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.661002   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.661036   59107 retry.go:31] will retry after 318.627165ms: missing components: kube-dns, kube-proxy
	I0708 21:01:14.988296   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.988336   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.988348   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.988359   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.988369   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.988376   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.988388   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.988398   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.988425   59107 retry.go:31] will retry after 333.622066ms: missing components: kube-dns, kube-proxy
	I0708 21:01:15.024853   59107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.056470802s)
	I0708 21:01:15.024902   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.024914   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.025237   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.025264   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.025266   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.025279   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.025288   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.025550   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.025566   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.348381   59107 system_pods.go:86] 8 kube-system pods found
	I0708 21:01:15.348419   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.348430   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.348440   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:15.348448   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:15.348455   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:15.348464   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:15.348473   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:15.348483   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:15.348502   59107 retry.go:31] will retry after 415.910372ms: missing components: kube-dns, kube-proxy
	I0708 21:01:15.736384   59107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.539741133s)
	I0708 21:01:15.736440   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.736456   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.736743   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.736782   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.736763   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.736803   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.736851   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.737097   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.737135   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.737148   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.737157   59107 addons.go:475] Verifying addon metrics-server=true in "embed-certs-239931"
	I0708 21:01:15.739025   59107 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0708 21:01:13.102963   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:15.601580   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:16.101049   58678 pod_ready.go:81] duration metric: took 4m0.00868677s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	E0708 21:01:16.101081   58678 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0708 21:01:16.101094   58678 pod_ready.go:38] duration metric: took 4m5.070908601s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:16.101112   58678 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:16.101147   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:16.101210   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:16.175601   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:16.175631   58678 cri.go:89] found id: ""
	I0708 21:01:16.175642   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:16.175703   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.182938   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:16.183013   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:16.261385   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:16.261411   58678 cri.go:89] found id: ""
	I0708 21:01:16.261423   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:16.261483   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.266231   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:16.266310   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:15.741167   59107 addons.go:510] duration metric: took 2.172185316s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0708 21:01:15.890659   59107 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:15.890702   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.890713   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.890723   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:15.890731   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:15.890738   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:15.890745   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Running
	I0708 21:01:15.890751   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:15.890759   59107 system_pods.go:89] "metrics-server-569cc877fc-f2dkn" [1d3c3e8e-356d-40b9-8add-35eec096e9f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:15.890772   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:15.890790   59107 retry.go:31] will retry after 557.749423ms: missing components: kube-dns
	I0708 21:01:16.457046   59107 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:16.457093   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:16.457105   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:16.457114   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:16.457124   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:16.457131   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:16.457137   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Running
	I0708 21:01:16.457143   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:16.457153   59107 system_pods.go:89] "metrics-server-569cc877fc-f2dkn" [1d3c3e8e-356d-40b9-8add-35eec096e9f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:16.457173   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:16.457183   59107 system_pods.go:126] duration metric: took 2.209089992s to wait for k8s-apps to be running ...
	I0708 21:01:16.457196   59107 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:16.457251   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:16.474652   59107 system_svc.go:56] duration metric: took 17.443712ms WaitForService to wait for kubelet
	I0708 21:01:16.474691   59107 kubeadm.go:576] duration metric: took 2.905677883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:16.474715   59107 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:16.478431   59107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:16.478456   59107 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:16.478480   59107 node_conditions.go:105] duration metric: took 3.758433ms to run NodePressure ...
	I0708 21:01:16.478502   59107 start.go:240] waiting for startup goroutines ...
	I0708 21:01:16.478515   59107 start.go:245] waiting for cluster config update ...
	I0708 21:01:16.478529   59107 start.go:254] writing updated cluster config ...
	I0708 21:01:16.478860   59107 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:16.536046   59107 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:16.538131   59107 out.go:177] * Done! kubectl is now configured to use "embed-certs-239931" cluster and "default" namespace by default
	I0708 21:01:12.440116   59655 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 21:01:12.440237   59655 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 21:01:13.441567   59655 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001312349s
	I0708 21:01:13.441690   59655 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 21:01:18.943345   59655 kubeadm.go:309] [api-check] The API server is healthy after 5.501634999s
	I0708 21:01:18.963728   59655 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 21:01:18.980036   59655 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 21:01:19.028362   59655 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 21:01:19.028635   59655 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-071971 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 21:01:19.051700   59655 kubeadm.go:309] [bootstrap-token] Using token: guoi3f.tsy4dvdlokyfqa2b
	I0708 21:01:19.053224   59655 out.go:204]   - Configuring RBAC rules ...
	I0708 21:01:19.053323   59655 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 21:01:19.063058   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 21:01:19.077711   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 21:01:19.090415   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 21:01:19.095539   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 21:01:19.101465   59655 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 21:01:19.351634   59655 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 21:01:19.809053   59655 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 21:01:20.359069   59655 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 21:01:20.359125   59655 kubeadm.go:309] 
	I0708 21:01:20.359193   59655 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 21:01:20.359227   59655 kubeadm.go:309] 
	I0708 21:01:20.359368   59655 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 21:01:20.359379   59655 kubeadm.go:309] 
	I0708 21:01:20.359439   59655 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 21:01:20.359553   59655 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 21:01:20.359613   59655 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 21:01:20.359624   59655 kubeadm.go:309] 
	I0708 21:01:20.359686   59655 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 21:01:20.359694   59655 kubeadm.go:309] 
	I0708 21:01:20.359733   59655 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 21:01:20.359740   59655 kubeadm.go:309] 
	I0708 21:01:20.359787   59655 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 21:01:20.359899   59655 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 21:01:20.359994   59655 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 21:01:20.360003   59655 kubeadm.go:309] 
	I0708 21:01:20.360096   59655 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 21:01:20.360194   59655 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 21:01:20.360202   59655 kubeadm.go:309] 
	I0708 21:01:20.360311   59655 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token guoi3f.tsy4dvdlokyfqa2b \
	I0708 21:01:20.360468   59655 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 21:01:20.360507   59655 kubeadm.go:309] 	--control-plane 
	I0708 21:01:20.360516   59655 kubeadm.go:309] 
	I0708 21:01:20.360628   59655 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 21:01:20.360639   59655 kubeadm.go:309] 
	I0708 21:01:20.360765   59655 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token guoi3f.tsy4dvdlokyfqa2b \
	I0708 21:01:20.360891   59655 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 21:01:20.361857   59655 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 21:01:20.361894   59655 cni.go:84] Creating CNI manager for ""
	I0708 21:01:20.361910   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:01:20.363579   59655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 21:01:16.309299   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:16.309328   58678 cri.go:89] found id: ""
	I0708 21:01:16.309337   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:16.309403   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.314236   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:16.314320   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:16.371891   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:16.371919   58678 cri.go:89] found id: ""
	I0708 21:01:16.371937   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:16.372008   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.380409   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:16.380480   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:16.428411   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:16.428441   58678 cri.go:89] found id: ""
	I0708 21:01:16.428452   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:16.428514   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.433310   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:16.433390   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:16.474785   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:16.474807   58678 cri.go:89] found id: ""
	I0708 21:01:16.474816   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:16.474882   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.480849   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:16.480933   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:16.529115   58678 cri.go:89] found id: ""
	I0708 21:01:16.529136   58678 logs.go:276] 0 containers: []
	W0708 21:01:16.529146   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:16.529153   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:16.529222   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:16.576499   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:16.576519   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:16.576527   58678 cri.go:89] found id: ""
	I0708 21:01:16.576536   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:16.576584   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.581261   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.587704   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:16.587733   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:16.651329   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:16.651385   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:16.706341   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:16.706380   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:17.302518   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:17.302570   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:17.373619   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:17.373651   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:17.414687   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:17.414722   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:17.470462   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:17.470499   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:17.487151   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:17.487189   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:17.625611   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:17.625655   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:17.673291   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:17.673325   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:17.712222   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:17.712253   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:17.752635   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:17.752665   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:17.794056   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:17.794085   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:01:20.341805   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:20.362405   58678 api_server.go:72] duration metric: took 4m15.074761342s to wait for apiserver process to appear ...
	I0708 21:01:20.362430   58678 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:20.362465   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:20.362523   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:20.409947   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:20.409974   58678 cri.go:89] found id: ""
	I0708 21:01:20.409983   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:20.410040   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.414415   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:20.414476   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:20.463162   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:20.463186   58678 cri.go:89] found id: ""
	I0708 21:01:20.463196   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:20.463263   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.468905   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:20.468986   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:20.514265   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:20.514291   58678 cri.go:89] found id: ""
	I0708 21:01:20.514299   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:20.514357   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.519003   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:20.519081   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:20.565097   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:20.565122   58678 cri.go:89] found id: ""
	I0708 21:01:20.565132   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:20.565190   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.569971   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:20.570048   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:20.614435   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:20.614459   58678 cri.go:89] found id: ""
	I0708 21:01:20.614469   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:20.614525   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.619745   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:20.619824   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:20.660213   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:20.660235   58678 cri.go:89] found id: ""
	I0708 21:01:20.660242   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:20.660292   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.664740   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:20.664822   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:20.710279   58678 cri.go:89] found id: ""
	I0708 21:01:20.710300   58678 logs.go:276] 0 containers: []
	W0708 21:01:20.710307   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:20.710312   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:20.710359   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:20.751880   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:20.751906   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:20.751910   58678 cri.go:89] found id: ""
	I0708 21:01:20.751917   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:20.752028   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.756530   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.760679   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:20.760705   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:20.800525   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:20.800556   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:20.845629   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:20.845666   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:20.364837   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 21:01:20.376977   59655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 21:01:20.400133   59655 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 21:01:20.400241   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:20.400291   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-071971 minikube.k8s.io/updated_at=2024_07_08T21_01_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=default-k8s-diff-port-071971 minikube.k8s.io/primary=true
	I0708 21:01:20.597429   59655 ops.go:34] apiserver oom_adj: -16
	I0708 21:01:20.597490   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.098582   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.597812   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:22.097790   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.356988   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:21.357025   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:21.416130   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:21.416160   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:21.431831   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:21.431865   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:21.479568   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:21.479597   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:21.527937   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:21.527970   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:21.569569   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:21.569605   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:21.691646   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:21.691678   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:21.737949   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:21.737975   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:21.789038   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:21.789069   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:21.831677   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:21.831703   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:01:24.380502   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 21:01:24.385139   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I0708 21:01:24.386116   58678 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:24.386137   58678 api_server.go:131] duration metric: took 4.023699983s to wait for apiserver health ...
	I0708 21:01:24.386146   58678 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:24.386171   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:24.386225   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:24.423786   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:24.423809   58678 cri.go:89] found id: ""
	I0708 21:01:24.423816   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:24.423869   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.428385   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:24.428447   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:24.467186   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:24.467206   58678 cri.go:89] found id: ""
	I0708 21:01:24.467213   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:24.467269   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.472208   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:24.472273   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:24.511157   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:24.511188   58678 cri.go:89] found id: ""
	I0708 21:01:24.511199   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:24.511266   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.516077   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:24.516144   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:24.556095   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:24.556115   58678 cri.go:89] found id: ""
	I0708 21:01:24.556122   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:24.556171   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.560735   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:24.560795   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:24.602473   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:24.602498   58678 cri.go:89] found id: ""
	I0708 21:01:24.602508   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:24.602562   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.608926   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:24.609003   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:24.653230   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:24.653258   58678 cri.go:89] found id: ""
	I0708 21:01:24.653267   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:24.653327   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.657884   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:24.657954   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:24.700775   58678 cri.go:89] found id: ""
	I0708 21:01:24.700800   58678 logs.go:276] 0 containers: []
	W0708 21:01:24.700810   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:24.700817   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:24.700876   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:24.738593   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:24.738619   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:24.738625   58678 cri.go:89] found id: ""
	I0708 21:01:24.738633   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:24.738689   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.743324   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.747684   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:24.747709   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:24.800431   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:24.800467   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:24.910702   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:24.910738   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:24.967323   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:24.967355   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:25.012335   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:25.012367   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:25.393024   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:25.393064   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:01:25.449280   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:25.449315   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:25.488676   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:25.488703   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:25.503705   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:25.503734   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:25.551111   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:25.551155   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:25.598388   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:25.598425   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:25.642052   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:25.642087   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:25.680632   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:25.680665   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:22.597628   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:23.098128   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:23.597756   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:24.097555   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:24.598149   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:25.098149   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:25.598255   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:26.097514   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:26.598211   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:27.097610   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.229251   58678 system_pods.go:59] 8 kube-system pods found
	I0708 21:01:28.229286   58678 system_pods.go:61] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running
	I0708 21:01:28.229293   58678 system_pods.go:61] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running
	I0708 21:01:28.229298   58678 system_pods.go:61] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running
	I0708 21:01:28.229304   58678 system_pods.go:61] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running
	I0708 21:01:28.229308   58678 system_pods.go:61] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 21:01:28.229312   58678 system_pods.go:61] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running
	I0708 21:01:28.229321   58678 system_pods.go:61] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:28.229327   58678 system_pods.go:61] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 21:01:28.229337   58678 system_pods.go:74] duration metric: took 3.843183956s to wait for pod list to return data ...
	I0708 21:01:28.229347   58678 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:28.232297   58678 default_sa.go:45] found service account: "default"
	I0708 21:01:28.232323   58678 default_sa.go:55] duration metric: took 2.96709ms for default service account to be created ...
	I0708 21:01:28.232333   58678 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:28.240720   58678 system_pods.go:86] 8 kube-system pods found
	I0708 21:01:28.240750   58678 system_pods.go:89] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running
	I0708 21:01:28.240755   58678 system_pods.go:89] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running
	I0708 21:01:28.240760   58678 system_pods.go:89] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running
	I0708 21:01:28.240765   58678 system_pods.go:89] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running
	I0708 21:01:28.240770   58678 system_pods.go:89] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 21:01:28.240774   58678 system_pods.go:89] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running
	I0708 21:01:28.240781   58678 system_pods.go:89] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:28.240787   58678 system_pods.go:89] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 21:01:28.240794   58678 system_pods.go:126] duration metric: took 8.454141ms to wait for k8s-apps to be running ...
	I0708 21:01:28.240804   58678 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:28.240855   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:28.256600   58678 system_svc.go:56] duration metric: took 15.789082ms WaitForService to wait for kubelet
	I0708 21:01:28.256630   58678 kubeadm.go:576] duration metric: took 4m22.968988646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:28.256654   58678 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:28.260384   58678 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:28.260402   58678 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:28.260412   58678 node_conditions.go:105] duration metric: took 3.753004ms to run NodePressure ...
	I0708 21:01:28.260422   58678 start.go:240] waiting for startup goroutines ...
	I0708 21:01:28.260429   58678 start.go:245] waiting for cluster config update ...
	I0708 21:01:28.260438   58678 start.go:254] writing updated cluster config ...
	I0708 21:01:28.260686   58678 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:28.311517   58678 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:28.313560   58678 out.go:177] * Done! kubectl is now configured to use "no-preload-028021" cluster and "default" namespace by default
	I0708 21:01:27.598457   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.098475   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.598380   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:29.097496   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:29.598229   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:30.097844   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:30.598323   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:31.097781   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:31.598085   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:32.098438   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:32.598450   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.098414   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.597823   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.688717   59655 kubeadm.go:1107] duration metric: took 13.288534329s to wait for elevateKubeSystemPrivileges
	W0708 21:01:33.688756   59655 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 21:01:33.688765   59655 kubeadm.go:393] duration metric: took 5m12.976251287s to StartCluster
	I0708 21:01:33.688782   59655 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:33.688874   59655 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:01:33.690446   59655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:33.690691   59655 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:01:33.690814   59655 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 21:01:33.690875   59655 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-071971"
	I0708 21:01:33.690893   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:01:33.690907   59655 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-071971"
	I0708 21:01:33.690902   59655 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-071971"
	W0708 21:01:33.690915   59655 addons.go:243] addon storage-provisioner should already be in state true
	I0708 21:01:33.690914   59655 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-071971"
	I0708 21:01:33.690939   59655 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-071971"
	I0708 21:01:33.690945   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.690957   59655 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-071971"
	W0708 21:01:33.690968   59655 addons.go:243] addon metrics-server should already be in state true
	I0708 21:01:33.691002   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.691272   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691274   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691294   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.691299   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.691323   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691361   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.692506   59655 out.go:177] * Verifying Kubernetes components...
	I0708 21:01:33.694134   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:01:33.708343   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0708 21:01:33.708681   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0708 21:01:33.708849   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.709011   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.709402   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.709421   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.709559   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.709578   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.709795   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.709864   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.710365   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.710411   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.710417   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.710445   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.710809   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
	I0708 21:01:33.711278   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.711858   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.711892   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.712294   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.712604   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.716565   59655 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-071971"
	W0708 21:01:33.716590   59655 addons.go:243] addon default-storageclass should already be in state true
	I0708 21:01:33.716620   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.716990   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.717041   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.728113   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0708 21:01:33.728257   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0708 21:01:33.728694   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.728742   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.729182   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.729211   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.729331   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.729353   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.729605   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.729663   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.729781   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.729846   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.731832   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.731878   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.734021   59655 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:01:33.734026   59655 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 21:01:33.736062   59655 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:33.736094   59655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 21:01:33.736122   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.736174   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 21:01:33.736192   59655 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 21:01:33.736222   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.736793   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0708 21:01:33.737419   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.739820   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.739837   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.740075   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740272   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.740463   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.740484   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740512   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740818   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.740967   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.741060   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.741213   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.741225   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.741279   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.741309   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.741438   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.741596   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.741587   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.741730   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.741820   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.758223   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0708 21:01:33.758739   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.759237   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.759254   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.759633   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.759909   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.761455   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.761644   59655 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:33.761656   59655 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 21:01:33.761669   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.764245   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.764541   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.764563   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.764701   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.764872   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.765022   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.765126   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.926862   59655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:01:33.980155   59655 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-071971" to be "Ready" ...
	I0708 21:01:33.993505   59655 node_ready.go:49] node "default-k8s-diff-port-071971" has status "Ready":"True"
	I0708 21:01:33.993526   59655 node_ready.go:38] duration metric: took 13.344616ms for node "default-k8s-diff-port-071971" to be "Ready" ...
	I0708 21:01:33.993534   59655 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:34.001402   59655 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:34.045900   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:34.058039   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 21:01:34.058059   59655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 21:01:34.102931   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:34.121513   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 21:01:34.121541   59655 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 21:01:34.190181   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:34.190208   59655 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 21:01:34.232200   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:35.071867   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.071888   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.071977   59655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.026035336s)
	I0708 21:01:35.072026   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.072044   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.072157   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.072192   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.072205   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.072212   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.073887   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.073887   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.073917   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.073989   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.074003   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.074013   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.073907   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.074111   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.074438   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.074461   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.146813   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.146840   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.147181   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.147201   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.337952   59655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.105709862s)
	I0708 21:01:35.338010   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.338023   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.338415   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.338447   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.338461   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.338471   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.338484   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.338733   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.338751   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.338763   59655 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-071971"
	I0708 21:01:35.340678   59655 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0708 21:01:35.341902   59655 addons.go:510] duration metric: took 1.651084154s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0708 21:01:36.011439   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:37.008538   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.008567   59655 pod_ready.go:81] duration metric: took 3.0071384s for pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.008582   59655 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.013291   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.013313   59655 pod_ready.go:81] duration metric: took 4.723566ms for pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.013326   59655 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.017974   59655 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.017997   59655 pod_ready.go:81] duration metric: took 4.66297ms for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.018009   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.022526   59655 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.022550   59655 pod_ready.go:81] duration metric: took 4.533312ms for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.022563   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.027009   59655 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.027032   59655 pod_ready.go:81] duration metric: took 4.462202ms for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.027042   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l2mdd" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.406030   59655 pod_ready.go:92] pod "kube-proxy-l2mdd" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.406055   59655 pod_ready.go:81] duration metric: took 379.00677ms for pod "kube-proxy-l2mdd" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.406064   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.806120   59655 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.806141   59655 pod_ready.go:81] duration metric: took 400.070718ms for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.806151   59655 pod_ready.go:38] duration metric: took 3.812606006s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:37.806165   59655 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:37.806214   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:37.822846   59655 api_server.go:72] duration metric: took 4.132126389s to wait for apiserver process to appear ...
	I0708 21:01:37.822872   59655 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:37.822889   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 21:01:37.827017   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 200:
	ok
	I0708 21:01:37.827906   59655 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:37.827930   59655 api_server.go:131] duration metric: took 5.051704ms to wait for apiserver health ...
	I0708 21:01:37.827938   59655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:38.010909   59655 system_pods.go:59] 9 kube-system pods found
	I0708 21:01:38.010937   59655 system_pods.go:61] "coredns-7db6d8ff4d-8msvk" [38c1e0eb-5eb4-4acb-a5ae-c72871884e3d] Running
	I0708 21:01:38.010942   59655 system_pods.go:61] "coredns-7db6d8ff4d-hq7zj" [ddb0f99d-a91d-4bb7-96e7-695b6101a601] Running
	I0708 21:01:38.010946   59655 system_pods.go:61] "etcd-default-k8s-diff-port-071971" [e3399214-404c-423e-9648-b4d920028a92] Running
	I0708 21:01:38.010949   59655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071971" [7b726b49-c243-4126-b6d2-fc12abc9a042] Running
	I0708 21:01:38.010953   59655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071971" [6a731125-daa4-4da1-b9e0-1206da592fde] Running
	I0708 21:01:38.010956   59655 system_pods.go:61] "kube-proxy-l2mdd" [b1d70ae2-ed86-49bd-8910-a12c5cd8091a] Running
	I0708 21:01:38.010959   59655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071971" [dc238033-038e-49ec-ba48-392b0ec2f7bd] Running
	I0708 21:01:38.010965   59655 system_pods.go:61] "metrics-server-569cc877fc-k8vhl" [09f957f3-d76f-4f21-b9a6-e5b249d07e1e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:38.010970   59655 system_pods.go:61] "storage-provisioner" [805a8fdb-ed9e-4f80-a2c9-7d8a0155b228] Running
	I0708 21:01:38.010979   59655 system_pods.go:74] duration metric: took 183.034922ms to wait for pod list to return data ...
	I0708 21:01:38.010987   59655 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:38.205307   59655 default_sa.go:45] found service account: "default"
	I0708 21:01:38.205331   59655 default_sa.go:55] duration metric: took 194.338319ms for default service account to be created ...
	I0708 21:01:38.205340   59655 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:38.410958   59655 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:38.410988   59655 system_pods.go:89] "coredns-7db6d8ff4d-8msvk" [38c1e0eb-5eb4-4acb-a5ae-c72871884e3d] Running
	I0708 21:01:38.410995   59655 system_pods.go:89] "coredns-7db6d8ff4d-hq7zj" [ddb0f99d-a91d-4bb7-96e7-695b6101a601] Running
	I0708 21:01:38.411000   59655 system_pods.go:89] "etcd-default-k8s-diff-port-071971" [e3399214-404c-423e-9648-b4d920028a92] Running
	I0708 21:01:38.411005   59655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071971" [7b726b49-c243-4126-b6d2-fc12abc9a042] Running
	I0708 21:01:38.411009   59655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071971" [6a731125-daa4-4da1-b9e0-1206da592fde] Running
	I0708 21:01:38.411013   59655 system_pods.go:89] "kube-proxy-l2mdd" [b1d70ae2-ed86-49bd-8910-a12c5cd8091a] Running
	I0708 21:01:38.411017   59655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071971" [dc238033-038e-49ec-ba48-392b0ec2f7bd] Running
	I0708 21:01:38.411024   59655 system_pods.go:89] "metrics-server-569cc877fc-k8vhl" [09f957f3-d76f-4f21-b9a6-e5b249d07e1e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:38.411029   59655 system_pods.go:89] "storage-provisioner" [805a8fdb-ed9e-4f80-a2c9-7d8a0155b228] Running
	I0708 21:01:38.411040   59655 system_pods.go:126] duration metric: took 205.695019ms to wait for k8s-apps to be running ...
	I0708 21:01:38.411050   59655 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:38.411092   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:38.428218   59655 system_svc.go:56] duration metric: took 17.158999ms WaitForService to wait for kubelet
	I0708 21:01:38.428248   59655 kubeadm.go:576] duration metric: took 4.737530934s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:38.428270   59655 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:38.606369   59655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:38.606394   59655 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:38.606404   59655 node_conditions.go:105] duration metric: took 178.130401ms to run NodePressure ...
	I0708 21:01:38.606415   59655 start.go:240] waiting for startup goroutines ...
	I0708 21:01:38.606423   59655 start.go:245] waiting for cluster config update ...
	I0708 21:01:38.606432   59655 start.go:254] writing updated cluster config ...
	I0708 21:01:38.606686   59655 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:38.657280   59655 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:38.659556   59655 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-071971" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.242589478Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720472780242564672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86610ee3-b22f-45cd-8510-a98ada06b975 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.243268383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2bde3c0-1dfd-4b2d-8be1-cc7771c05f9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.243321203Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2bde3c0-1dfd-4b2d-8be1-cc7771c05f9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.243405149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f2bde3c0-1dfd-4b2d-8be1-cc7771c05f9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.277458580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43ae91b1-fd0f-4ac4-ab52-2ee7988b191f name=/runtime.v1.RuntimeService/Version
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.277554510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43ae91b1-fd0f-4ac4-ab52-2ee7988b191f name=/runtime.v1.RuntimeService/Version
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.278831798Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ddf8a8e-55c0-4110-b1f9-de872577721d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.279240422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720472780279215110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ddf8a8e-55c0-4110-b1f9-de872577721d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.279982303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a9764a7-d2c5-4c20-8867-669a6ea7b14f name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.280038675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a9764a7-d2c5-4c20-8867-669a6ea7b14f name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.280071415Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6a9764a7-d2c5-4c20-8867-669a6ea7b14f name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.313881477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6e41988-3b92-4eb4-af55-2eb84f2126f2 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.313954444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6e41988-3b92-4eb4-af55-2eb84f2126f2 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.315600893Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2013d392-2e92-4b2e-86f4-896062cbde82 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.315984958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720472780315960556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2013d392-2e92-4b2e-86f4-896062cbde82 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.316717416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67e387b2-0a45-402b-a41e-39a65c425131 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.316773952Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67e387b2-0a45-402b-a41e-39a65c425131 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.316808875Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=67e387b2-0a45-402b-a41e-39a65c425131 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.353728860Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a154667-3516-482c-a466-bde4917ca120 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.353808044Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a154667-3516-482c-a466-bde4917ca120 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.355285086Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3966877-44ed-4fc0-b068-bdc8b72e28bb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.355752483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720472780355723746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3966877-44ed-4fc0-b068-bdc8b72e28bb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.356473358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c2e0086-6843-4ebe-9786-2f8b60b2ba22 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.356541853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c2e0086-6843-4ebe-9786-2f8b60b2ba22 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:06:20 old-k8s-version-914355 crio[647]: time="2024-07-08 21:06:20.356595515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6c2e0086-6843-4ebe-9786-2f8b60b2ba22 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul 8 20:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050631] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039837] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.623579] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul 8 20:49] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.602924] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.192762] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.057317] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062771] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.200906] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.157667] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.288740] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +6.100045] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.067577] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.762847] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[ +12.466178] kauditd_printk_skb: 46 callbacks suppressed
	[Jul 8 20:53] systemd-fstab-generator[5013]: Ignoring "noauto" option for root device
	[Jul 8 20:55] systemd-fstab-generator[5303]: Ignoring "noauto" option for root device
	[  +0.059941] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:06:20 up 17 min,  0 users,  load average: 0.10, 0.07, 0.02
	Linux old-k8s-version-914355 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc000bf0540, 0x48ab5d6, 0x3, 0xc000b8cc30, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000bf0540, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b8cc30, 0x24, 0x0, ...)
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]: net.(*Dialer).DialContext(0xc00015c660, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b8cc30, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0008d2420, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b8cc30, 0x24, 0x60, 0x7f76ec15fb88, 0x118, ...)
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]: net/http.(*Transport).dial(0xc00067a780, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b8cc30, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]: net/http.(*Transport).dialConn(0xc00067a780, 0x4f7fe00, 0xc000052030, 0x0, 0xc000bc2ea0, 0x5, 0xc000b8cc30, 0x24, 0x0, 0xc000987c20, ...)
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]: net/http.(*Transport).dialConnFor(0xc00067a780, 0xc0009dfad0)
	Jul 08 21:06:18 old-k8s-version-914355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]: created by net/http.(*Transport).queueForDial
	Jul 08 21:06:18 old-k8s-version-914355 kubelet[6475]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 08 21:06:19 old-k8s-version-914355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 08 21:06:19 old-k8s-version-914355 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 08 21:06:19 old-k8s-version-914355 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 08 21:06:19 old-k8s-version-914355 kubelet[6484]: I0708 21:06:19.109427    6484 server.go:416] Version: v1.20.0
	Jul 08 21:06:19 old-k8s-version-914355 kubelet[6484]: I0708 21:06:19.110002    6484 server.go:837] Client rotation is on, will bootstrap in background
	Jul 08 21:06:19 old-k8s-version-914355 kubelet[6484]: I0708 21:06:19.112498    6484 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 08 21:06:19 old-k8s-version-914355 kubelet[6484]: W0708 21:06:19.113614    6484 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 08 21:06:19 old-k8s-version-914355 kubelet[6484]: I0708 21:06:19.113835    6484 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914355 -n old-k8s-version-914355
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 2 (241.077803ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-914355" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.17s)
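Note on the failure above: the captured logs point at why this wait could never succeed on old-k8s-version-914355 — crictl reports an empty container list, kubelet is crash-looping (systemd restart counter at 114), and kubectl cannot reach the apiserver at localhost:8443, so the dashboard pod can neither be scheduled nor even listed. For reference, the check that times out here is the start_stop_delete_test.go:274 wait (shown below for the embed-certs run): poll for up to 9 minutes for a Running pod matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. The following is a minimal, self-contained sketch of that kind of poll using client-go, not the minikube helper itself; it assumes a client-go version that provides wait.PollUntilContextTimeout and uses a placeholder kubeconfig path.

    // Sketch only: reproduce the "wait for a Running kubernetes-dashboard pod" check
    // with client-go. The kubeconfig path below is a placeholder, not from the report.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll every 5s, for up to 9m, for a Running pod with the dashboard label.
    	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
    				LabelSelector: "k8s-app=kubernetes-dashboard",
    			})
    			if err != nil {
    				// Tolerate transient API errors (e.g. apiserver restarting) and keep polling.
    				return false, nil
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		fmt.Println("dashboard pod never became Running:", err)
    		return
    	}
    	fmt.Println("dashboard pod is Running")
    }

Run against this cluster, such a poll would time out exactly as the harness did, because the apiserver on old-k8s-version-914355 never comes back up after the stop/start cycle.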

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-239931 -n embed-certs-239931
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-08 21:10:17.153767179 +0000 UTC m=+6077.066644980
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
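The failed wait above is a label-selector poll: the test looks for a Ready pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and gives up after 9m0s. A minimal client-go sketch of an equivalent wait follows (an illustration, not the start_stop_delete_test.go code; it assumes a kubeconfig context named after the profile and an apimachinery version that provides wait.PollUntilContextTimeout):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// dashboardReady polls until a pod matching k8s-app=kubernetes-dashboard
// reports the Ready condition, or the 9m0s budget used by the test expires.
func dashboardReady(ctx context.Context, kubecontext string) error {
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: kubecontext},
	).ClientConfig()
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// Transient API errors (e.g. rate-limiter waits) just mean "poll again".
				return false, nil
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
			}
			return false, nil
		})
}

func main() {
	if err := dashboardReady(context.Background(), "embed-certs-239931"); err != nil {
		fmt.Println("dashboard pod never became ready:", err)
	}
}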
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-239931 -n embed-certs-239931
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-239931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-239931 logs -n 25: (1.538012043s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-897827                                        | pause-897827                 | jenkins | v1.33.1 | 08 Jul 24 20:46 UTC | 08 Jul 24 20:46 UTC |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:46 UTC | 08 Jul 24 20:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| ssh     | cert-options-059722 ssh                                | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-059722 -- sudo                         | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-059722                                 | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-028021             | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-914355             | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-239931            | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-733920 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-733920                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:50 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028021                  | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071971  | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-239931                 | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071971       | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC | 08 Jul 24 21:01 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 20:53:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 20:53:37.291760   59655 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:53:37.291847   59655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:53:37.291851   59655 out.go:304] Setting ErrFile to fd 2...
	I0708 20:53:37.291855   59655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:53:37.292047   59655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:53:37.292558   59655 out.go:298] Setting JSON to false
	I0708 20:53:37.293434   59655 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5766,"bootTime":1720466251,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:53:37.293485   59655 start.go:139] virtualization: kvm guest
	I0708 20:53:37.296412   59655 out.go:177] * [default-k8s-diff-port-071971] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:53:37.297727   59655 notify.go:220] Checking for updates...
	I0708 20:53:37.297756   59655 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:53:37.299168   59655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:53:37.300541   59655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:53:37.301818   59655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:53:37.303117   59655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:53:37.304266   59655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:53:37.305793   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:53:37.306182   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:53:37.306236   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:53:37.321719   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I0708 20:53:37.322090   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:53:37.322593   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:53:37.322617   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:53:37.322908   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:53:37.323093   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:53:37.323329   59655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:53:37.323638   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:53:37.323679   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:53:37.338244   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0708 20:53:37.338660   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:53:37.339118   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:53:37.339144   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:53:37.339463   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:53:37.339735   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:53:37.374356   59655 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 20:53:37.375714   59655 start.go:297] selected driver: kvm2
	I0708 20:53:37.375729   59655 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:53:37.375866   59655 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:53:37.376843   59655 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:53:37.376918   59655 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:53:37.391219   59655 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:53:37.391602   59655 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:53:37.391659   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:53:37.391672   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:53:37.391707   59655 start.go:340] cluster config:
	{Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:53:37.391797   59655 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:53:37.393453   59655 out.go:177] * Starting "default-k8s-diff-port-071971" primary control-plane node in "default-k8s-diff-port-071971" cluster
	I0708 20:53:37.923695   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:40.995762   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:37.394736   59655 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:53:37.394768   59655 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 20:53:37.394777   59655 cache.go:56] Caching tarball of preloaded images
	I0708 20:53:37.394849   59655 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 20:53:37.394860   59655 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 20:53:37.394962   59655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:53:37.395154   59655 start.go:360] acquireMachinesLock for default-k8s-diff-port-071971: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:53:47.075721   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:50.147727   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:56.227766   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:59.299738   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:05.379699   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:08.451749   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:14.531759   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:17.603688   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:23.683730   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:26.755738   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:32.835706   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:35.907702   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:41.987722   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:45.059873   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:51.139726   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:54.211797   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:00.291730   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:03.363720   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:09.443741   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:12.515718   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:19.358315   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:55:19.358408   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:55:19.359948   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:55:19.360000   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:55:19.360076   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:55:19.360217   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:55:19.360354   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:55:19.360443   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:55:19.362594   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:55:19.362671   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:55:19.362761   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:55:19.362915   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:55:19.362997   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:55:19.363087   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:55:19.363181   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:55:19.363271   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:55:19.363360   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:55:19.363470   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:55:19.363582   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:55:19.363636   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:55:19.363711   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:55:19.363781   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:55:19.363852   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:55:19.363941   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:55:19.364010   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:55:19.364135   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:55:19.364226   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:55:19.364276   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:55:19.364342   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:55:18.595786   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:19.366132   57466 out.go:204]   - Booting up control plane ...
	I0708 20:55:19.366219   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:55:19.366301   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:55:19.366364   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:55:19.366433   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:55:19.366579   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:55:19.366629   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:55:19.366692   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.366846   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.366909   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367070   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367133   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367285   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367344   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367511   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367575   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367735   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367743   57466 kubeadm.go:309] 
	I0708 20:55:19.367783   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:55:19.367817   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:55:19.367823   57466 kubeadm.go:309] 
	I0708 20:55:19.367851   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:55:19.367888   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:55:19.367991   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:55:19.368009   57466 kubeadm.go:309] 
	I0708 20:55:19.368127   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:55:19.368164   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:55:19.368192   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:55:19.368198   57466 kubeadm.go:309] 
	I0708 20:55:19.368284   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:55:19.368355   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:55:19.368362   57466 kubeadm.go:309] 
	I0708 20:55:19.368455   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:55:19.368539   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:55:19.368606   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:55:19.368666   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:55:19.368673   57466 kubeadm.go:309] 
	W0708 20:55:19.368784   57466 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0708 20:55:19.368831   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 20:55:19.838778   57466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:55:19.853958   57466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:55:19.863986   57466 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:55:19.864010   57466 kubeadm.go:156] found existing configuration files:
	
	I0708 20:55:19.864055   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:55:19.873085   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:55:19.873147   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:55:19.882654   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:55:19.891579   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:55:19.891634   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:55:19.901397   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.910901   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:55:19.910976   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.920599   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:55:19.929826   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:55:19.929891   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:55:19.939284   57466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 20:55:20.153136   57466 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 20:55:21.667700   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:27.747756   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:30.819712   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:33.824320   59107 start.go:364] duration metric: took 3m48.54985296s to acquireMachinesLock for "embed-certs-239931"
	I0708 20:55:33.824375   59107 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:55:33.824390   59107 fix.go:54] fixHost starting: 
	I0708 20:55:33.824700   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:55:33.824728   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:55:33.839554   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0708 20:55:33.839987   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:55:33.840472   59107 main.go:141] libmachine: Using API Version  1
	I0708 20:55:33.840495   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:55:33.840844   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:55:33.841030   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:33.841194   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 20:55:33.842597   59107 fix.go:112] recreateIfNeeded on embed-certs-239931: state=Stopped err=<nil>
	I0708 20:55:33.842627   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	W0708 20:55:33.842787   59107 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:55:33.844574   59107 out.go:177] * Restarting existing kvm2 VM for "embed-certs-239931" ...
	I0708 20:55:33.845674   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Start
	I0708 20:55:33.845858   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring networks are active...
	I0708 20:55:33.846607   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring network default is active
	I0708 20:55:33.846907   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring network mk-embed-certs-239931 is active
	I0708 20:55:33.847329   59107 main.go:141] libmachine: (embed-certs-239931) Getting domain xml...
	I0708 20:55:33.848120   59107 main.go:141] libmachine: (embed-certs-239931) Creating domain...
	I0708 20:55:35.057523   59107 main.go:141] libmachine: (embed-certs-239931) Waiting to get IP...
	I0708 20:55:35.058300   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.058841   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.058870   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.058773   60083 retry.go:31] will retry after 280.969113ms: waiting for machine to come up
	I0708 20:55:33.821580   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:55:33.821617   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:55:33.821932   58678 buildroot.go:166] provisioning hostname "no-preload-028021"
	I0708 20:55:33.821957   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:55:33.822166   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:55:33.824193   58678 machine.go:97] duration metric: took 4m37.421469682s to provisionDockerMachine
	I0708 20:55:33.824234   58678 fix.go:56] duration metric: took 4m37.444794791s for fixHost
	I0708 20:55:33.824241   58678 start.go:83] releasing machines lock for "no-preload-028021", held for 4m37.44481517s
	W0708 20:55:33.824262   58678 start.go:713] error starting host: provision: host is not running
	W0708 20:55:33.824343   58678 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0708 20:55:33.824352   58678 start.go:728] Will try again in 5 seconds ...
	I0708 20:55:35.341327   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.341861   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.341882   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.341837   60083 retry.go:31] will retry after 333.972717ms: waiting for machine to come up
	I0708 20:55:35.677531   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.678035   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.678066   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.677984   60083 retry.go:31] will retry after 387.46652ms: waiting for machine to come up
	I0708 20:55:36.066618   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:36.067079   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:36.067106   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:36.067044   60083 retry.go:31] will retry after 523.369614ms: waiting for machine to come up
	I0708 20:55:36.591863   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:36.592337   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:36.592363   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:36.592295   60083 retry.go:31] will retry after 670.675561ms: waiting for machine to come up
	I0708 20:55:37.264084   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:37.264521   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:37.264565   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:37.264485   60083 retry.go:31] will retry after 775.348922ms: waiting for machine to come up
	I0708 20:55:38.041398   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:38.041860   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:38.041885   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:38.041801   60083 retry.go:31] will retry after 1.135585711s: waiting for machine to come up
	I0708 20:55:39.179405   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:39.179910   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:39.179938   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:39.179867   60083 retry.go:31] will retry after 1.422689354s: waiting for machine to come up
	I0708 20:55:38.826037   58678 start.go:360] acquireMachinesLock for no-preload-028021: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:55:40.603811   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:40.604240   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:40.604261   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:40.604199   60083 retry.go:31] will retry after 1.640612147s: waiting for machine to come up
	I0708 20:55:42.247230   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:42.247797   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:42.247837   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:42.247733   60083 retry.go:31] will retry after 2.031069729s: waiting for machine to come up
	I0708 20:55:44.280878   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:44.281419   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:44.281451   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:44.281355   60083 retry.go:31] will retry after 2.394813785s: waiting for machine to come up
	I0708 20:55:46.678897   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:46.679398   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:46.679430   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:46.679357   60083 retry.go:31] will retry after 2.419242459s: waiting for machine to come up
	I0708 20:55:49.100362   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:49.100901   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:49.100964   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:49.100858   60083 retry.go:31] will retry after 4.241202363s: waiting for machine to come up
	I0708 20:55:54.868873   59655 start.go:364] duration metric: took 2m17.473689428s to acquireMachinesLock for "default-k8s-diff-port-071971"
	I0708 20:55:54.868970   59655 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:55:54.868991   59655 fix.go:54] fixHost starting: 
	I0708 20:55:54.869400   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:55:54.869432   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:55:54.888853   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44159
	I0708 20:55:54.889234   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:55:54.889674   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:55:54.889698   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:55:54.890009   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:55:54.890196   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:55:54.890332   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 20:55:54.891932   59655 fix.go:112] recreateIfNeeded on default-k8s-diff-port-071971: state=Stopped err=<nil>
	I0708 20:55:54.891972   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	W0708 20:55:54.892120   59655 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:55:54.894439   59655 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-071971" ...
	I0708 20:55:53.347154   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.347587   59107 main.go:141] libmachine: (embed-certs-239931) Found IP for machine: 192.168.61.126
	I0708 20:55:53.347601   59107 main.go:141] libmachine: (embed-certs-239931) Reserving static IP address...
	I0708 20:55:53.347612   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has current primary IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.348084   59107 main.go:141] libmachine: (embed-certs-239931) Reserved static IP address: 192.168.61.126
	I0708 20:55:53.348106   59107 main.go:141] libmachine: (embed-certs-239931) Waiting for SSH to be available...
	I0708 20:55:53.348119   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "embed-certs-239931", mac: "52:54:00:b3:d9:ac", ip: "192.168.61.126"} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.348136   59107 main.go:141] libmachine: (embed-certs-239931) DBG | skip adding static IP to network mk-embed-certs-239931 - found existing host DHCP lease matching {name: "embed-certs-239931", mac: "52:54:00:b3:d9:ac", ip: "192.168.61.126"}
	I0708 20:55:53.348148   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Getting to WaitForSSH function...
	I0708 20:55:53.350167   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.350545   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.350583   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.350651   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Using SSH client type: external
	I0708 20:55:53.350675   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa (-rw-------)
	I0708 20:55:53.350704   59107 main.go:141] libmachine: (embed-certs-239931) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:55:53.350722   59107 main.go:141] libmachine: (embed-certs-239931) DBG | About to run SSH command:
	I0708 20:55:53.350736   59107 main.go:141] libmachine: (embed-certs-239931) DBG | exit 0
	I0708 20:55:53.479934   59107 main.go:141] libmachine: (embed-certs-239931) DBG | SSH cmd err, output: <nil>: 
	I0708 20:55:53.480309   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetConfigRaw
	I0708 20:55:53.480891   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:53.483079   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.483399   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.483424   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.483740   59107 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/config.json ...
	I0708 20:55:53.483920   59107 machine.go:94] provisionDockerMachine start ...
	I0708 20:55:53.483937   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:53.484172   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.486461   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.486772   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.486793   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.486921   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.487075   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.487253   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.487385   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.487556   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.487774   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.487786   59107 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:55:53.600047   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:55:53.600078   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.600308   59107 buildroot.go:166] provisioning hostname "embed-certs-239931"
	I0708 20:55:53.600342   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.600508   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.603107   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.603498   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.603529   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.603728   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.603954   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.604122   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.604345   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.604512   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.604716   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.604737   59107 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-239931 && echo "embed-certs-239931" | sudo tee /etc/hostname
	I0708 20:55:53.734414   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-239931
	
	I0708 20:55:53.734457   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.737117   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.737473   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.737501   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.737640   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.737852   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.738020   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.738184   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.738355   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.738536   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.738558   59107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-239931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-239931/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-239931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:55:53.860753   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:55:53.860781   59107 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:55:53.860799   59107 buildroot.go:174] setting up certificates
	I0708 20:55:53.860808   59107 provision.go:84] configureAuth start
	I0708 20:55:53.860816   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.861070   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:53.863652   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.863999   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.864018   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.864221   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.866207   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.866480   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.866504   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.866613   59107 provision.go:143] copyHostCerts
	I0708 20:55:53.866671   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:55:53.866680   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:55:53.866741   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:55:53.866837   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:55:53.866845   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:55:53.866868   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:55:53.866932   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:55:53.866939   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:55:53.866959   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:55:53.867017   59107 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.embed-certs-239931 san=[127.0.0.1 192.168.61.126 embed-certs-239931 localhost minikube]
	I0708 20:55:54.171765   59107 provision.go:177] copyRemoteCerts
	I0708 20:55:54.171835   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:55:54.171859   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.174341   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.174621   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.174650   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.174762   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.174957   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.175129   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.175280   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.262051   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:55:54.287118   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0708 20:55:54.310071   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:55:54.337811   59107 provision.go:87] duration metric: took 476.990356ms to configureAuth
	I0708 20:55:54.337851   59107 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:55:54.338077   59107 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:55:54.338147   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.340972   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.341259   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.341296   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.341423   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.341720   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.341870   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.342006   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.342147   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:54.342332   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:54.342350   59107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:55:54.618752   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:55:54.618775   59107 machine.go:97] duration metric: took 1.134844127s to provisionDockerMachine
	I0708 20:55:54.618786   59107 start.go:293] postStartSetup for "embed-certs-239931" (driver="kvm2")
	I0708 20:55:54.618795   59107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:55:54.618823   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.619220   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:55:54.619249   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.621857   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.622144   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.622168   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.622348   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.622532   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.622703   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.622853   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.710096   59107 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:55:54.714437   59107 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:55:54.714458   59107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:55:54.714524   59107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:55:54.714597   59107 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:55:54.714679   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:55:54.724350   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:55:54.748078   59107 start.go:296] duration metric: took 129.264358ms for postStartSetup
	I0708 20:55:54.748120   59107 fix.go:56] duration metric: took 20.923736253s for fixHost
	I0708 20:55:54.748138   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.750818   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.751200   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.751227   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.751377   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.751611   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.751763   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.751879   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.752034   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:54.752240   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:54.752256   59107 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:55:54.868663   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472154.844724958
	
	I0708 20:55:54.868694   59107 fix.go:216] guest clock: 1720472154.844724958
	I0708 20:55:54.868706   59107 fix.go:229] Guest: 2024-07-08 20:55:54.844724958 +0000 UTC Remote: 2024-07-08 20:55:54.748123056 +0000 UTC m=+249.617599643 (delta=96.601902ms)
	I0708 20:55:54.868764   59107 fix.go:200] guest clock delta is within tolerance: 96.601902ms
	I0708 20:55:54.868776   59107 start.go:83] releasing machines lock for "embed-certs-239931", held for 21.044425411s
	I0708 20:55:54.868811   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.869092   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:54.871867   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.872252   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.872295   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.872451   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.872921   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.873060   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.873151   59107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:55:54.873196   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.873271   59107 ssh_runner.go:195] Run: cat /version.json
	I0708 20:55:54.873297   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.876118   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876142   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876614   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.876641   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876682   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.876699   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876851   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.876903   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.877017   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.877020   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.877193   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.877266   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.877349   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.877424   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.984516   59107 ssh_runner.go:195] Run: systemctl --version
	I0708 20:55:54.990926   59107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:55:55.142010   59107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:55:55.148138   59107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:55:55.148203   59107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:55:55.164086   59107 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:55:55.164111   59107 start.go:494] detecting cgroup driver to use...
	I0708 20:55:55.164204   59107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:55:55.184836   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:55:55.204002   59107 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:55:55.204079   59107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:55:55.218405   59107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:55:55.233462   59107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:55:55.357278   59107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:55:55.521141   59107 docker.go:233] disabling docker service ...
	I0708 20:55:55.521218   59107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:55:55.538949   59107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:55:55.558613   59107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:55:55.696926   59107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:55:55.819810   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:55:55.837012   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:55:55.856417   59107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:55:55.856497   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.868488   59107 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:55:55.868556   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.879503   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.891183   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.901872   59107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:55:55.914498   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.925676   59107 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.944340   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.955961   59107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:55:55.965785   59107 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:55:55.965845   59107 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:55:55.979853   59107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:55:55.989331   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:55:56.108476   59107 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:55:56.262396   59107 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:55:56.262463   59107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:55:56.267682   59107 start.go:562] Will wait 60s for crictl version
	I0708 20:55:56.267740   59107 ssh_runner.go:195] Run: which crictl
	I0708 20:55:56.273115   59107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:55:56.323276   59107 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:55:56.323364   59107 ssh_runner.go:195] Run: crio --version
	I0708 20:55:56.352650   59107 ssh_runner.go:195] Run: crio --version
	I0708 20:55:56.394502   59107 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:55:54.895951   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Start
	I0708 20:55:54.896150   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring networks are active...
	I0708 20:55:54.896971   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring network default is active
	I0708 20:55:54.897344   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring network mk-default-k8s-diff-port-071971 is active
	I0708 20:55:54.897672   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Getting domain xml...
	I0708 20:55:54.898340   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Creating domain...
	I0708 20:55:56.182187   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting to get IP...
	I0708 20:55:56.183209   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.183699   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.183759   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.183663   60221 retry.go:31] will retry after 255.382138ms: waiting for machine to come up
	I0708 20:55:56.441290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.441760   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.441789   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.441718   60221 retry.go:31] will retry after 363.116234ms: waiting for machine to come up
	I0708 20:55:56.806418   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.806871   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.806899   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.806819   60221 retry.go:31] will retry after 392.319836ms: waiting for machine to come up
	I0708 20:55:57.200645   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.201144   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.201176   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:57.201095   60221 retry.go:31] will retry after 528.490844ms: waiting for machine to come up
	I0708 20:55:56.395778   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:56.398458   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:56.398826   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:56.398853   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:56.399088   59107 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0708 20:55:56.403789   59107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:55:56.418081   59107 kubeadm.go:877] updating cluster {Name:embed-certs-239931 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:55:56.418244   59107 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:55:56.418312   59107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:55:56.459969   59107 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:55:56.460034   59107 ssh_runner.go:195] Run: which lz4
	I0708 20:55:56.464561   59107 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0708 20:55:56.469087   59107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:55:56.469130   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 20:55:58.010716   59107 crio.go:462] duration metric: took 1.546186223s to copy over tarball
	I0708 20:55:58.010782   59107 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:55:57.731640   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.732172   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.732223   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:57.732129   60221 retry.go:31] will retry after 554.611559ms: waiting for machine to come up
	I0708 20:55:58.287924   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.288512   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.288557   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:58.288491   60221 retry.go:31] will retry after 642.466107ms: waiting for machine to come up
	I0708 20:55:58.932485   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.933002   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.933032   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:58.932958   60221 retry.go:31] will retry after 999.83146ms: waiting for machine to come up
	I0708 20:55:59.934050   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:59.934618   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:59.934664   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:59.934571   60221 retry.go:31] will retry after 1.069868254s: waiting for machine to come up
	I0708 20:56:01.006049   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:01.006563   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:01.006594   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:01.006506   60221 retry.go:31] will retry after 1.182777891s: waiting for machine to come up
	I0708 20:56:02.191001   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:02.191460   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:02.191483   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:02.191418   60221 retry.go:31] will retry after 1.559742627s: waiting for machine to come up
	I0708 20:56:00.267199   59107 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256392679s)
	I0708 20:56:00.267230   59107 crio.go:469] duration metric: took 2.256489175s to extract the tarball
	I0708 20:56:00.267240   59107 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:56:00.305692   59107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:00.346669   59107 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:56:00.346694   59107 cache_images.go:84] Images are preloaded, skipping loading
	I0708 20:56:00.346703   59107 kubeadm.go:928] updating node { 192.168.61.126 8443 v1.30.2 crio true true} ...
	I0708 20:56:00.346804   59107 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-239931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:00.346868   59107 ssh_runner.go:195] Run: crio config
	I0708 20:56:00.392577   59107 cni.go:84] Creating CNI manager for ""
	I0708 20:56:00.392597   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:00.392608   59107 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:00.392637   59107 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-239931 NodeName:embed-certs-239931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:00.392814   59107 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-239931"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:00.392894   59107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:00.403593   59107 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:00.403675   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:00.413449   59107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0708 20:56:00.430407   59107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:00.447599   59107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0708 20:56:00.465525   59107 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:00.469912   59107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:00.483255   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:00.623802   59107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:00.642946   59107 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931 for IP: 192.168.61.126
	I0708 20:56:00.642967   59107 certs.go:194] generating shared ca certs ...
	I0708 20:56:00.642982   59107 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:00.643143   59107 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:00.643184   59107 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:00.643193   59107 certs.go:256] generating profile certs ...
	I0708 20:56:00.643270   59107 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/client.key
	I0708 20:56:00.643317   59107 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.key.7743ab88
	I0708 20:56:00.643354   59107 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.key
	I0708 20:56:00.643487   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:00.643524   59107 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:00.643533   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:00.643556   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:00.643579   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:00.643604   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:00.643670   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:00.644353   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:00.699260   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:00.752536   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:00.783946   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:00.812524   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0708 20:56:00.843035   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:56:00.872061   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:00.898805   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 20:56:00.925402   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:00.952114   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:00.984067   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:01.010037   59107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:01.027599   59107 ssh_runner.go:195] Run: openssl version
	I0708 20:56:01.033942   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:01.046273   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.051807   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.051887   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.058482   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:01.070774   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:01.083215   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.088389   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.088460   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.094594   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:01.107360   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:01.119973   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.125011   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.125085   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.131596   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
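	The ls/openssl/ln sequence above installs each PEM under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), which is how the system trust store locates CA certificates. A minimal Go sketch of the same idea, shelling out to openssl for the hash (the path is the illustrative one from this run):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/131412.pem" // path taken from the log above
    	// openssl x509 -hash prints the subject hash used for the symlink name.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// ln -fs keeps the operation idempotent, matching the logged command.
    	if err := exec.Command("sudo", "ln", "-fs", cert, link).Run(); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", cert, "->", link)
    }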
	I0708 20:56:01.143993   59107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:01.149299   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:01.156201   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:01.162939   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:01.169874   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:01.176264   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:01.182905   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
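	The -checkend 86400 calls above ask whether each control-plane certificate expires within the next 24 hours. The same check can be done natively with crypto/x509; this is a sketch, not minikube's code:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d, mirroring openssl x509 -checkend.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }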
	I0708 20:56:01.189961   59107 kubeadm.go:391] StartCluster: {Name:embed-certs-239931 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:01.190041   59107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:01.190085   59107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:01.238097   59107 cri.go:89] found id: ""
	I0708 20:56:01.238167   59107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:01.250478   59107 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:01.250503   59107 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:01.250509   59107 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:01.250562   59107 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:01.261664   59107 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:01.262667   59107 kubeconfig.go:125] found "embed-certs-239931" server: "https://192.168.61.126:8443"
	I0708 20:56:01.264788   59107 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:01.275846   59107 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.126
	I0708 20:56:01.275888   59107 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:01.275908   59107 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:01.276006   59107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:01.318646   59107 cri.go:89] found id: ""
	I0708 20:56:01.318745   59107 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:01.340273   59107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:01.353325   59107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:01.353360   59107 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:01.353412   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:56:01.363659   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:01.363732   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:01.374340   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:56:01.384284   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:01.384352   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:01.394981   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:56:01.405532   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:01.405604   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:01.416741   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:56:01.427724   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:01.427812   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
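	The grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is deleted so the kubeadm phases below can regenerate it. A local-filesystem sketch of the same loop (minikube runs it over SSH with sudo):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := "/etc/kubernetes/" + f
    		data, err := os.ReadFile(path)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing at the wrong endpoint: drop it and let
    			// "kubeadm init phase kubeconfig" recreate it.
    			_ = os.Remove(path)
    			fmt.Println("removed stale", path)
    			continue
    		}
    		fmt.Println("keeping", path)
    	}
    }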
	I0708 20:56:01.439352   59107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:01.451286   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:01.581829   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.013995   59107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.432133224s)
	I0708 20:56:03.014024   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.229195   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.305328   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
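	Rather than a full kubeadm init, the restart path replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file. A sketch of driving those phases from Go, with the command strings copied from the log (not minikube's actual runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			panic(fmt.Sprintf("phase %q failed: %v\n%s", p, err, out))
    		}
    	}
    }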
	I0708 20:56:03.415409   59107 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:03.415508   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:03.916187   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:04.416389   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:04.489450   59107 api_server.go:72] duration metric: took 1.074041899s to wait for apiserver process to appear ...
	I0708 20:56:04.489482   59107 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:04.489516   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:04.490133   59107 api_server.go:269] stopped: https://192.168.61.126:8443/healthz: Get "https://192.168.61.126:8443/healthz": dial tcp 192.168.61.126:8443: connect: connection refused
	I0708 20:56:04.989698   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:03.753446   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:03.753998   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:03.754026   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:03.753954   60221 retry.go:31] will retry after 1.922949894s: waiting for machine to come up
	I0708 20:56:05.679244   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:05.679831   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:05.679860   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:05.679794   60221 retry.go:31] will retry after 3.531627288s: waiting for machine to come up
	I0708 20:56:07.673375   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:56:07.673404   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:56:07.673420   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:07.776516   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:07.776551   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:07.989668   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:07.996865   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:07.996897   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:08.490538   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:08.496342   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:08.496374   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:08.990583   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:09.001043   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0708 20:56:09.011126   59107 api_server.go:141] control plane version: v1.30.2
	I0708 20:56:09.011160   59107 api_server.go:131] duration metric: took 4.521668725s to wait for apiserver health ...
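	The progression above, connection refused, then 403 for an anonymous probe, then 500 while etcd and the bootstrap poststarthooks settle, then 200 ok, is the normal warm-up sequence for a restarted apiserver. A minimal poller for the same endpoint (a sketch; it skips TLS verification, whereas minikube authenticates with the cluster's client certificates):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.61.126:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body))
    				return
    			}
    			fmt.Println("not ready yet:", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
    	}
    	panic("apiserver never became healthy")
    }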
	I0708 20:56:09.011171   59107 cni.go:84] Creating CNI manager for ""
	I0708 20:56:09.011179   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:09.012842   59107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:56:09.014197   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:56:09.041325   59107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
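	The 496-byte /etc/cni/net.d/1-k8s.conflist pushed above is the bridge CNI chain matching the 10.244.0.0/16 pod subnet. The log does not reproduce the file itself; the following is a representative bridge-plus-portmap conflist of the same shape, written from Go, and may differ from the exact bytes minikube generates:

    package main

    import "os"

    // conflist is a representative bridge CNI chain for podCIDR 10.244.0.0/16;
    // the real file minikube writes may differ in field names and defaults.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }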
	I0708 20:56:09.073110   59107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:56:09.086225   59107 system_pods.go:59] 8 kube-system pods found
	I0708 20:56:09.086265   59107 system_pods.go:61] "coredns-7db6d8ff4d-wnqsl" [868e66bf-9f86-465f-aad1-d11a6d218ee6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:56:09.086276   59107 system_pods.go:61] "etcd-embed-certs-239931" [48815314-6e48-4fe0-b7b1-4a1d2f6610d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:56:09.086286   59107 system_pods.go:61] "kube-apiserver-embed-certs-239931" [665311f4-d633-4b93-ae8c-2b43b45fff68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:56:09.086294   59107 system_pods.go:61] "kube-controller-manager-embed-certs-239931" [4ab6d657-8c74-491c-b965-ac68f2bd323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:56:09.086309   59107 system_pods.go:61] "kube-proxy-5h5xl" [9b169148-aa75-40a2-b08b-1d579ee179b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 20:56:09.086316   59107 system_pods.go:61] "kube-scheduler-embed-certs-239931" [012399d8-10a4-407d-a899-3c840dd52ca8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:56:09.086331   59107 system_pods.go:61] "metrics-server-569cc877fc-h4btg" [c78cfc3c-159f-4a06-b4a0-63f8bd0a6703] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:56:09.086339   59107 system_pods.go:61] "storage-provisioner" [2ca0ea1d-5d1c-4e18-a871-e035a8946b3c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 20:56:09.086348   59107 system_pods.go:74] duration metric: took 13.216051ms to wait for pod list to return data ...
	I0708 20:56:09.086363   59107 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:56:09.089689   59107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:56:09.089719   59107 node_conditions.go:123] node cpu capacity is 2
	I0708 20:56:09.089732   59107 node_conditions.go:105] duration metric: took 3.363611ms to run NodePressure ...
	I0708 20:56:09.089751   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:09.377271   59107 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:56:09.383148   59107 kubeadm.go:733] kubelet initialised
	I0708 20:56:09.383174   59107 kubeadm.go:734] duration metric: took 5.876526ms waiting for restarted kubelet to initialise ...
	I0708 20:56:09.383183   59107 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:56:09.388903   59107 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace to be "Ready" ...
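	pod_ready.go polls each system-critical pod until its PodReady condition reports True; the later "Ready":"False" lines are those polls for coredns. A client-go sketch of the same readiness check (the kubeconfig path is illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-wnqsl", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	panic("pod never became Ready")
    }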
	I0708 20:56:09.214856   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:09.215410   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:09.215441   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:09.215355   60221 retry.go:31] will retry after 3.64169465s: waiting for machine to come up
	I0708 20:56:14.180834   58678 start.go:364] duration metric: took 35.354748041s to acquireMachinesLock for "no-preload-028021"
	I0708 20:56:14.180893   58678 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:56:14.180905   58678 fix.go:54] fixHost starting: 
	I0708 20:56:14.181259   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:56:14.181299   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:56:14.197712   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I0708 20:56:14.198157   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:56:14.198615   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:56:14.198637   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:56:14.198996   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:56:14.199173   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:14.199342   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:56:14.200905   58678 fix.go:112] recreateIfNeeded on no-preload-028021: state=Stopped err=<nil>
	I0708 20:56:14.200930   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	W0708 20:56:14.201103   58678 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:56:14.203062   58678 out.go:177] * Restarting existing kvm2 VM for "no-preload-028021" ...
	I0708 20:56:11.396453   59107 pod_ready.go:102] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:13.396672   59107 pod_ready.go:102] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:12.860535   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.860988   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Found IP for machine: 192.168.72.163
	I0708 20:56:12.861010   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Reserving static IP address...
	I0708 20:56:12.861027   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has current primary IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.861445   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-071971", mac: "52:54:00:40:a7:be", ip: "192.168.72.163"} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.861473   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Reserved static IP address: 192.168.72.163
	I0708 20:56:12.861494   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | skip adding static IP to network mk-default-k8s-diff-port-071971 - found existing host DHCP lease matching {name: "default-k8s-diff-port-071971", mac: "52:54:00:40:a7:be", ip: "192.168.72.163"}
	I0708 20:56:12.861515   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Getting to WaitForSSH function...
	I0708 20:56:12.861531   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for SSH to be available...
	I0708 20:56:12.864099   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.864436   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.864465   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.864631   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Using SSH client type: external
	I0708 20:56:12.864663   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa (-rw-------)
	I0708 20:56:12.864693   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:56:12.864708   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | About to run SSH command:
	I0708 20:56:12.864721   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | exit 0
	I0708 20:56:12.996077   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | SSH cmd err, output: <nil>: 
	I0708 20:56:12.996459   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetConfigRaw
	I0708 20:56:12.997091   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:12.999431   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.999815   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.999844   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.000145   59655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:56:13.000354   59655 machine.go:94] provisionDockerMachine start ...
	I0708 20:56:13.000377   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:13.000558   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.002898   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.003255   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.003290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.003444   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.003626   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.003778   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.003930   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.004094   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.004297   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.004311   59655 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:56:13.119929   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:56:13.119956   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.120203   59655 buildroot.go:166] provisioning hostname "default-k8s-diff-port-071971"
	I0708 20:56:13.120320   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.120544   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.123750   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.124225   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.124256   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.124438   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.124647   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.124818   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.124993   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.125155   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.125339   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.125360   59655 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-071971 && echo "default-k8s-diff-port-071971" | sudo tee /etc/hostname
	I0708 20:56:13.256165   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-071971
	
	I0708 20:56:13.256199   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.258991   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.259342   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.259376   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.259596   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.259828   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.260011   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.260149   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.260325   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.260506   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.260530   59655 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-071971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-071971/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-071971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:56:13.381593   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
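	Every provisioning step in this block runs over SSH with the machine's id_rsa key; the hostname and /etc/hosts snippet just above is one such command. A sketch of an equivalent client using golang.org/x/crypto/ssh, with host-key checking disabled to match the StrictHostKeyChecking=no options logged earlier:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    	}
    	client, err := ssh.Dial("tcp", "192.168.72.163:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("remote hostname: %s", out)
    }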
	I0708 20:56:13.381627   59655 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:56:13.381684   59655 buildroot.go:174] setting up certificates
	I0708 20:56:13.381700   59655 provision.go:84] configureAuth start
	I0708 20:56:13.381716   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.382023   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:13.385065   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.385358   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.385394   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.385566   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.387752   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.388092   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.388132   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.388290   59655 provision.go:143] copyHostCerts
	I0708 20:56:13.388350   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:56:13.388361   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:56:13.388408   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:56:13.388506   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:56:13.388516   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:56:13.388536   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:56:13.388587   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:56:13.388593   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:56:13.388610   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:56:13.389123   59655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-071971 san=[127.0.0.1 192.168.72.163 default-k8s-diff-port-071971 localhost minikube]
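	configureAuth mints a server certificate for the machine, signed by the local CA and carrying the SAN list shown (loopback, machine IP, hostname, localhost, minikube). A condensed crypto/x509 sketch of that signing step; the paths and lifetime come from this run's profile, and the real provisioner lives in libmachine rather than in code like this:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/tls"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	base := "/home/jenkins/minikube-integration/19195-5988/.minikube/certs/"
    	caPair, err := tls.LoadX509KeyPair(base+"ca.pem", base+"ca-key.pem")
    	if err != nil {
    		panic(err)
    	}
    	caCert, err := x509.ParseCertificate(caPair.Certificate[0])
    	if err != nil {
    		panic(err)
    	}
    	priv, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-071971"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"default-k8s-diff-port-071971", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.163")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caPair.PrivateKey)
    	if err != nil {
    		panic(err)
    	}
    	writePEM := func(path, typ string, b []byte) {
    		f, err := os.Create(path)
    		if err != nil {
    			panic(err)
    		}
    		defer f.Close()
    		if err := pem.Encode(f, &pem.Block{Type: typ, Bytes: b}); err != nil {
    			panic(err)
    		}
    	}
    	writePEM("server.pem", "CERTIFICATE", der)
    	writePEM("server-key.pem", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(priv))
    }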
	I0708 20:56:13.445451   59655 provision.go:177] copyRemoteCerts
	I0708 20:56:13.445509   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:56:13.445536   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.448926   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.449291   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.449320   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.449579   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.449785   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.449944   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.450097   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:13.542311   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0708 20:56:13.570585   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 20:56:13.597943   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:56:13.623837   59655 provision.go:87] duration metric: took 242.102893ms to configureAuth
	I0708 20:56:13.623874   59655 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:56:13.624077   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:56:13.624144   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.626802   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.627247   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.627277   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.627553   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.627738   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.627910   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.628047   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.628214   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.628414   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.628442   59655 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:56:13.930321   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:56:13.930349   59655 machine.go:97] duration metric: took 929.979999ms to provisionDockerMachine
	I0708 20:56:13.930361   59655 start.go:293] postStartSetup for "default-k8s-diff-port-071971" (driver="kvm2")
	I0708 20:56:13.930371   59655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:56:13.930385   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:13.930714   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:56:13.930747   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.933397   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.933704   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.933735   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.933927   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.934119   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.934266   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.934393   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.019603   59655 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:56:14.024556   59655 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:56:14.024589   59655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:56:14.024651   59655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:56:14.024744   59655 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:56:14.024836   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:56:14.035798   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:14.062351   59655 start.go:296] duration metric: took 131.974167ms for postStartSetup
	I0708 20:56:14.062402   59655 fix.go:56] duration metric: took 19.193418124s for fixHost
	I0708 20:56:14.062428   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.065264   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.065784   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.065822   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.066027   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.066271   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.066441   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.066716   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.066965   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:14.067197   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:14.067210   59655 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 20:56:14.180654   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472174.151879540
	
	I0708 20:56:14.180683   59655 fix.go:216] guest clock: 1720472174.151879540
	I0708 20:56:14.180695   59655 fix.go:229] Guest: 2024-07-08 20:56:14.15187954 +0000 UTC Remote: 2024-07-08 20:56:14.062408788 +0000 UTC m=+156.804206336 (delta=89.470752ms)
	I0708 20:56:14.180751   59655 fix.go:200] guest clock delta is within tolerance: 89.470752ms
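	The guest clock check above is plain subtraction: the guest timestamp returned by `date +%s.%N` minus the local timestamp recorded when fixHost finished. A minimal Go sketch reproducing the arithmetic from the values in the log (the 2-second tolerance is an assumed value for illustration; the report only shows that an 89ms delta passes the check):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the fix.go:229 line above.
	guest := time.Date(2024, time.July, 8, 20, 56, 14, 151879540, time.UTC)
	remote := time.Date(2024, time.July, 8, 20, 56, 14, 62408788, time.UTC)

	delta := guest.Sub(remote)
	fmt.Println(delta) // prints 89.470752ms, matching the log

	// Tolerance value is illustrative only.
	const tolerance = 2 * time.Second
	if delta < -tolerance || delta > tolerance {
		fmt.Println("guest clock delta outside tolerance")
	} else {
		fmt.Println("guest clock delta is within tolerance:", delta)
	}
}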
	I0708 20:56:14.180757   59655 start.go:83] releasing machines lock for "default-k8s-diff-port-071971", held for 19.311816598s
	I0708 20:56:14.180802   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.181119   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:14.183833   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.184164   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.184194   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.184365   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.184862   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.185029   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.185105   59655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:56:14.185152   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.185222   59655 ssh_runner.go:195] Run: cat /version.json
	I0708 20:56:14.185248   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.187788   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188002   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188135   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.188167   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.188299   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.188328   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188501   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.188505   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.188641   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.188715   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.188803   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.188885   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.189022   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.298253   59655 ssh_runner.go:195] Run: systemctl --version
	I0708 20:56:14.305004   59655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:56:14.457540   59655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:56:14.464497   59655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:56:14.464567   59655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:56:14.482063   59655 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:56:14.482093   59655 start.go:494] detecting cgroup driver to use...
	I0708 20:56:14.482172   59655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:56:14.500206   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:56:14.515905   59655 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:56:14.515952   59655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:56:14.532277   59655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:56:14.552772   59655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:56:14.686229   59655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:56:14.845428   59655 docker.go:233] disabling docker service ...
	I0708 20:56:14.845496   59655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:56:14.863157   59655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:56:14.881174   59655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:56:15.029269   59655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:56:15.165105   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:56:15.181619   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:56:15.202743   59655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:56:15.202806   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.215848   59655 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:56:15.215925   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.228697   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.240964   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.257002   59655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:56:15.270309   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.283215   59655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.303235   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.322364   59655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:56:15.340757   59655 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:56:15.340836   59655 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:56:15.360592   59655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:56:15.372486   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:15.510210   59655 ssh_runner.go:195] Run: sudo systemctl restart crio
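	The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before crio is reloaded and restarted. minikube does this with sed over SSH; the Go sketch below only illustrates the same line-rewrite pattern for the pause_image setting, with the path and value taken from the log:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Requires the same root access as the sed command it mirrors.
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		log.Fatal(err)
	}
}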
	I0708 20:56:15.656090   59655 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:56:15.656169   59655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:56:15.661847   59655 start.go:562] Will wait 60s for crictl version
	I0708 20:56:15.661917   59655 ssh_runner.go:195] Run: which crictl
	I0708 20:56:15.666004   59655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:56:15.707842   59655 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:56:15.707928   59655 ssh_runner.go:195] Run: crio --version
	I0708 20:56:15.740434   59655 ssh_runner.go:195] Run: crio --version
	I0708 20:56:15.772450   59655 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:56:14.204596   58678 main.go:141] libmachine: (no-preload-028021) Calling .Start
	I0708 20:56:14.204780   58678 main.go:141] libmachine: (no-preload-028021) Ensuring networks are active...
	I0708 20:56:14.205463   58678 main.go:141] libmachine: (no-preload-028021) Ensuring network default is active
	I0708 20:56:14.205799   58678 main.go:141] libmachine: (no-preload-028021) Ensuring network mk-no-preload-028021 is active
	I0708 20:56:14.206280   58678 main.go:141] libmachine: (no-preload-028021) Getting domain xml...
	I0708 20:56:14.207187   58678 main.go:141] libmachine: (no-preload-028021) Creating domain...
	I0708 20:56:15.514100   58678 main.go:141] libmachine: (no-preload-028021) Waiting to get IP...
	I0708 20:56:15.514946   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:15.515419   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:15.515473   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:15.515397   60369 retry.go:31] will retry after 282.59763ms: waiting for machine to come up
	I0708 20:56:15.799976   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:15.800525   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:15.800555   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:15.800482   60369 retry.go:31] will retry after 377.094067ms: waiting for machine to come up
	I0708 20:56:16.179257   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:16.179953   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:16.179979   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:16.179861   60369 retry.go:31] will retry after 433.953923ms: waiting for machine to come up
	I0708 20:56:15.773711   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:15.776947   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:15.777368   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:15.777400   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:15.777704   59655 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0708 20:56:15.782466   59655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:15.796924   59655 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:56:15.797072   59655 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:56:15.797138   59655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:15.841838   59655 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:56:15.841922   59655 ssh_runner.go:195] Run: which lz4
	I0708 20:56:15.846443   59655 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 20:56:15.851267   59655 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:56:15.851302   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 20:56:15.397039   59107 pod_ready.go:92] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:15.397070   59107 pod_ready.go:81] duration metric: took 6.008141421s for pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:15.397082   59107 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.405606   59107 pod_ready.go:92] pod "etcd-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:17.405638   59107 pod_ready.go:81] duration metric: took 2.008547358s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.405653   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.411786   59107 pod_ready.go:92] pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:17.411810   59107 pod_ready.go:81] duration metric: took 6.14625ms for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.411822   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.421681   59107 pod_ready.go:92] pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.421712   59107 pod_ready.go:81] duration metric: took 2.009879259s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.421725   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5h5xl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.428235   59107 pod_ready.go:92] pod "kube-proxy-5h5xl" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.428260   59107 pod_ready.go:81] duration metric: took 6.527896ms for pod "kube-proxy-5h5xl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.428269   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.433130   59107 pod_ready.go:92] pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.433154   59107 pod_ready.go:81] duration metric: took 4.87807ms for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.433163   59107 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:16.615670   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:16.616225   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:16.616257   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:16.616177   60369 retry.go:31] will retry after 489.658115ms: waiting for machine to come up
	I0708 20:56:17.107848   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:17.108391   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:17.108420   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:17.108341   60369 retry.go:31] will retry after 620.239043ms: waiting for machine to come up
	I0708 20:56:17.730239   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:17.730822   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:17.730854   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:17.730758   60369 retry.go:31] will retry after 818.379867ms: waiting for machine to come up
	I0708 20:56:18.550539   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:18.551024   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:18.551049   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:18.550993   60369 retry.go:31] will retry after 1.138596155s: waiting for machine to come up
	I0708 20:56:19.691669   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:19.692214   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:19.692267   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:19.692149   60369 retry.go:31] will retry after 1.467771065s: waiting for machine to come up
	I0708 20:56:21.161367   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:21.161916   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:21.161945   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:21.161854   60369 retry.go:31] will retry after 1.592022559s: waiting for machine to come up
	I0708 20:56:17.447251   59655 crio.go:462] duration metric: took 1.600850063s to copy over tarball
	I0708 20:56:17.447341   59655 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:56:19.773249   59655 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.325874804s)
	I0708 20:56:19.773277   59655 crio.go:469] duration metric: took 2.325993304s to extract the tarball
	I0708 20:56:19.773286   59655 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:56:19.811911   59655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:19.859029   59655 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:56:19.859060   59655 cache_images.go:84] Images are preloaded, skipping loading
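	The preload decision above hinges on `sudo crictl images --output json`: when the expected kube-apiserver tag is missing, the tarball is copied over and extracted, and the same check then reports all images as preloaded. A hedged Go sketch of that check; the JSON field names (`images`, `repoTags`) are assumptions based on the CRI list output, not code lifted from minikube:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	const want = "registry.k8s.io/kube-apiserver:v1.30.2" // tag named in the log above
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("all images are preloaded for cri-o runtime")
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded; extract /preloaded.tar.lz4")
}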
	I0708 20:56:19.859070   59655 kubeadm.go:928] updating node { 192.168.72.163 8444 v1.30.2 crio true true} ...
	I0708 20:56:19.859208   59655 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-071971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:19.859281   59655 ssh_runner.go:195] Run: crio config
	I0708 20:56:19.905778   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:56:19.905806   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:19.905822   59655 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:19.905847   59655 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.163 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-071971 NodeName:default-k8s-diff-port-071971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:19.906035   59655 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.163
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-071971"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:19.906113   59655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:19.916307   59655 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:19.916388   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:19.926496   59655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0708 20:56:19.947778   59655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:19.969466   59655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
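	The kubeadm config rendered above carries four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A throwaway Go sketch, not part of the test suite, that splits the file just scp'd to /var/tmp/minikube/kubeadm.yaml.new and lists the document kinds as a quick sanity check:

package main

import (
	"fmt"
	"log"
	"os"
	"regexp"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	kindRe := regexp.MustCompile(`(?m)^kind: (.+)$`)
	docs := strings.Split(string(data), "\n---\n")
	fmt.Printf("%d documents\n", len(docs))
	for _, doc := range docs {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Println(" -", m[1])
		}
	}
}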
	I0708 20:56:19.991103   59655 ssh_runner.go:195] Run: grep 192.168.72.163	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:19.995180   59655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:20.008005   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:20.143869   59655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:20.162694   59655 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971 for IP: 192.168.72.163
	I0708 20:56:20.162713   59655 certs.go:194] generating shared ca certs ...
	I0708 20:56:20.162745   59655 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:20.162930   59655 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:20.162986   59655 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:20.162997   59655 certs.go:256] generating profile certs ...
	I0708 20:56:20.163097   59655 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.key
	I0708 20:56:20.163220   59655 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.key.17bd30e8
	I0708 20:56:20.163259   59655 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.key
	I0708 20:56:20.163394   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:20.163478   59655 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:20.163493   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:20.163524   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:20.163558   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:20.163594   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:20.163659   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:20.164318   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:20.198987   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:20.251872   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:20.281444   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:20.305751   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0708 20:56:20.332608   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 20:56:20.365206   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:20.399631   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:56:20.430016   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:20.462126   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:20.492669   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:20.521867   59655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:20.540725   59655 ssh_runner.go:195] Run: openssl version
	I0708 20:56:20.546789   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:20.558515   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.563342   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.563430   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.570039   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:20.585367   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:20.601217   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.605930   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.605993   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.612015   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:56:20.623796   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:20.635305   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.640571   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.640649   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.648600   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:20.663899   59655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:20.669383   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:20.675967   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:20.682513   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:20.690280   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:20.698720   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:20.705678   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
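	Each `openssl x509 -noout -in ... -checkend 86400` run above asks whether the certificate expires within the next 24 hours. The same check expressed in Go (the path is one example from the log; the loop above covers the apiserver, etcd and front-proxy client certs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Example path; the log runs this check against several certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of -checkend 86400: fail if NotAfter falls within the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}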
	I0708 20:56:20.712524   59655 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:20.712643   59655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:20.712700   59655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:20.761032   59655 cri.go:89] found id: ""
	I0708 20:56:20.761107   59655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:20.772712   59655 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:20.772736   59655 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:20.772742   59655 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:20.772793   59655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:20.784860   59655 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:20.785974   59655 kubeconfig.go:125] found "default-k8s-diff-port-071971" server: "https://192.168.72.163:8444"
	I0708 20:56:20.788290   59655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:20.799889   59655 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.163
	I0708 20:56:20.799919   59655 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:20.799947   59655 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:20.800011   59655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:20.846864   59655 cri.go:89] found id: ""
	I0708 20:56:20.846936   59655 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:20.865883   59655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:20.877476   59655 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:20.877495   59655 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:20.877548   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0708 20:56:20.889786   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:20.889853   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:20.902185   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0708 20:56:20.913510   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:20.913573   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:20.923964   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0708 20:56:20.934048   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:20.934131   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:20.945078   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0708 20:56:20.955290   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:20.955354   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
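	The block above probes /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf, keeps any file that already references https://control-plane.minikube.internal:8444, and deletes the rest so kubeadm can regenerate them. A compact, illustrative Go version of that cleanup loop (not minikube's implementation, which greps over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: remove it so kubeadm writes a fresh one.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = os.Remove(f)
			continue
		}
		fmt.Println("keeping", f)
	}
}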
	I0708 20:56:20.966182   59655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:20.977508   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:21.319213   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:21.511204   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:23.942367   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:22.755738   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:22.756182   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:22.756243   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:22.756167   60369 retry.go:31] will retry after 1.858003233s: waiting for machine to come up
	I0708 20:56:24.616152   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:24.616674   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:24.616703   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:24.616618   60369 retry.go:31] will retry after 2.203640369s: waiting for machine to come up
	I0708 20:56:22.471504   59655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.152252924s)
	I0708 20:56:22.471539   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.692407   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.756884   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.892773   59655 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:22.892888   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.393789   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.893298   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.941073   59655 api_server.go:72] duration metric: took 1.048301169s to wait for apiserver process to appear ...
	I0708 20:56:23.941100   59655 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:23.941131   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.221991   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:56:27.222029   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:56:27.222048   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:26.441670   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:28.939138   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:27.353017   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.353069   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:27.442130   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.447304   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.447326   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:27.941979   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.951850   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.951878   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:28.441380   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:28.452031   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:28.452069   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:28.941613   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:28.946045   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:28.946084   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:29.441485   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:29.448847   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:29.448877   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:29.941906   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:29.946380   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:29.946416   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:30.442013   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:30.447291   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 200:
	ok
	I0708 20:56:30.454664   59655 api_server.go:141] control plane version: v1.30.2
	I0708 20:56:30.454693   59655 api_server.go:131] duration metric: took 6.513586414s to wait for apiserver health ...
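The healthz exchange above (first a 403 for system:anonymous, then 500s while the poststarthooks finish, finally 200) is the apiserver coming back up after the kubeadm init phases were re-run. A minimal sketch of that kind of readiness poll, assuming anonymous access with TLS verification skipped purely for illustration (the real client authenticates with the cluster's certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the timeout elapses; 403/500 responses are treated as "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping certificate verification is for illustration only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.163:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}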
	I0708 20:56:30.454701   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:56:30.454707   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:30.456577   59655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:56:26.821665   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:26.822266   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:26.822297   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:26.822209   60369 retry.go:31] will retry after 3.478824168s: waiting for machine to come up
	I0708 20:56:30.302329   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:30.302766   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:30.302796   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:30.302707   60369 retry.go:31] will retry after 3.597512692s: waiting for machine to come up
	I0708 20:56:30.458168   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:56:30.469918   59655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
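The 496-byte /etc/cni/net.d/1-k8s.conflist written here is not echoed into the log. For reference, a bridge conflist of the kind the "Configuring bridge CNI" step installs looks roughly like the following; the concrete field values are illustrative, not the file's actual contents:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}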
	I0708 20:56:30.492348   59655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:56:30.503174   59655 system_pods.go:59] 8 kube-system pods found
	I0708 20:56:30.503210   59655 system_pods.go:61] "coredns-7db6d8ff4d-c4tzw" [e5ea7dde-1134-45d0-b3e2-176e6a8f068e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:56:30.503218   59655 system_pods.go:61] "etcd-default-k8s-diff-port-071971" [693fd668-83c2-43e6-bf43-7b1a9e654db0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:56:30.503226   59655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071971" [eadde33a-b967-4a58-9730-d163e6b8c0c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:56:30.503233   59655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071971" [99bd8e55-e0a9-4071-a0f0-dc9d1e79b58d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:56:30.503238   59655 system_pods.go:61] "kube-proxy-vq4l8" [e2a4779c-e8ed-4f5b-872b-d10604936178] Running
	I0708 20:56:30.503244   59655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071971" [af6b0a79-be1e-4caa-86a6-47ac782ac438] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:56:30.503250   59655 system_pods.go:61] "metrics-server-569cc877fc-h2dzd" [7075aa8e-0716-4965-8a13-3ed804190b3e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:56:30.503257   59655 system_pods.go:61] "storage-provisioner" [9fca5ac9-cd65-4257-b012-20ded80a39a5] Running
	I0708 20:56:30.503265   59655 system_pods.go:74] duration metric: took 10.887672ms to wait for pod list to return data ...
	I0708 20:56:30.503279   59655 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:56:30.509137   59655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:56:30.509170   59655 node_conditions.go:123] node cpu capacity is 2
	I0708 20:56:30.509189   59655 node_conditions.go:105] duration metric: took 5.903588ms to run NodePressure ...
	I0708 20:56:30.509210   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:30.780430   59655 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:56:30.788138   59655 kubeadm.go:733] kubelet initialised
	I0708 20:56:30.788168   59655 kubeadm.go:734] duration metric: took 7.711132ms waiting for restarted kubelet to initialise ...
	I0708 20:56:30.788177   59655 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:56:30.797001   59655 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:30.939824   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:32.940860   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:34.941652   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:33.901849   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.902332   58678 main.go:141] libmachine: (no-preload-028021) Found IP for machine: 192.168.39.108
	I0708 20:56:33.902356   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has current primary IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.902361   58678 main.go:141] libmachine: (no-preload-028021) Reserving static IP address...
	I0708 20:56:33.902766   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "no-preload-028021", mac: "52:54:00:c5:5d:f8", ip: "192.168.39.108"} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:33.902797   58678 main.go:141] libmachine: (no-preload-028021) DBG | skip adding static IP to network mk-no-preload-028021 - found existing host DHCP lease matching {name: "no-preload-028021", mac: "52:54:00:c5:5d:f8", ip: "192.168.39.108"}
	I0708 20:56:33.902808   58678 main.go:141] libmachine: (no-preload-028021) Reserved static IP address: 192.168.39.108
	I0708 20:56:33.902825   58678 main.go:141] libmachine: (no-preload-028021) Waiting for SSH to be available...
	I0708 20:56:33.902835   58678 main.go:141] libmachine: (no-preload-028021) DBG | Getting to WaitForSSH function...
	I0708 20:56:33.905031   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.905318   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:33.905341   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.905479   58678 main.go:141] libmachine: (no-preload-028021) DBG | Using SSH client type: external
	I0708 20:56:33.905509   58678 main.go:141] libmachine: (no-preload-028021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa (-rw-------)
	I0708 20:56:33.905535   58678 main.go:141] libmachine: (no-preload-028021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:56:33.905560   58678 main.go:141] libmachine: (no-preload-028021) DBG | About to run SSH command:
	I0708 20:56:33.905573   58678 main.go:141] libmachine: (no-preload-028021) DBG | exit 0
	I0708 20:56:34.035510   58678 main.go:141] libmachine: (no-preload-028021) DBG | SSH cmd err, output: <nil>: 
	I0708 20:56:34.035876   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetConfigRaw
	I0708 20:56:34.036501   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:34.039070   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.039467   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.039496   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.039711   58678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/config.json ...
	I0708 20:56:34.039936   58678 machine.go:94] provisionDockerMachine start ...
	I0708 20:56:34.039955   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:34.040191   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.042269   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.042640   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.042666   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.042793   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.042954   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.043125   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.043292   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.043496   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.043662   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.043671   58678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:56:34.156092   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:56:34.156143   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.156412   58678 buildroot.go:166] provisioning hostname "no-preload-028021"
	I0708 20:56:34.156441   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.156639   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.159015   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.159420   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.159467   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.159606   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.159817   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.160015   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.160214   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.160407   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.160572   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.160584   58678 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-028021 && echo "no-preload-028021" | sudo tee /etc/hostname
	I0708 20:56:34.286222   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-028021
	
	I0708 20:56:34.286250   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.289067   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.289376   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.289399   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.289617   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.289832   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.289991   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.290129   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.290310   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.290471   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.290485   58678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-028021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-028021/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-028021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:56:34.414724   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:56:34.414749   58678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:56:34.414790   58678 buildroot.go:174] setting up certificates
	I0708 20:56:34.414799   58678 provision.go:84] configureAuth start
	I0708 20:56:34.414808   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.415115   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:34.417919   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.418241   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.418273   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.418491   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.421129   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.421603   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.421634   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.421756   58678 provision.go:143] copyHostCerts
	I0708 20:56:34.421818   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:56:34.421839   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:56:34.421906   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:56:34.422023   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:56:34.422034   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:56:34.422064   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:56:34.422151   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:56:34.422161   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:56:34.422196   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:56:34.422276   58678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.no-preload-028021 san=[127.0.0.1 192.168.39.108 localhost minikube no-preload-028021]
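A minimal sketch of what issuing such a server certificate from an existing CA looks like with Go's crypto/x509, using the SAN list shown in the line above (illustrative only, not minikube's provision code; error handling is omitted and the CA key is assumed to be PKCS#1 PEM):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the CA certificate and key (paths correspond to the ca.pem /
		// ca-key.pem referenced in the log; errors ignored for brevity).
		caPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caPEM)
		caCert, _ := x509.ParseCertificate(caBlock.Bytes)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 encoding

		// Fresh key pair for the server certificate.
		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-028021"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list from the log line above.
			DNSNames:    []string{"localhost", "minikube", "no-preload-028021"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.108")},
		}

		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
		_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
	}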
	I0708 20:56:34.634189   58678 provision.go:177] copyRemoteCerts
	I0708 20:56:34.634253   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:56:34.634281   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.637123   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.637364   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.637396   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.637609   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.637912   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.638172   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.638410   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:34.726761   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:56:34.751947   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0708 20:56:34.776165   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:56:34.803849   58678 provision.go:87] duration metric: took 389.036659ms to configureAuth
	I0708 20:56:34.803880   58678 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:56:34.804125   58678 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:56:34.804202   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.808559   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.808925   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.808966   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.809164   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.809416   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.809572   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.809710   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.809874   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.810069   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.810097   58678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:56:35.096796   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:56:35.096822   58678 machine.go:97] duration metric: took 1.056870853s to provisionDockerMachine
	I0708 20:56:35.096834   58678 start.go:293] postStartSetup for "no-preload-028021" (driver="kvm2")
	I0708 20:56:35.096847   58678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:56:35.096864   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.097227   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:56:35.097266   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.100040   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.100428   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.100449   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.100637   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.100826   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.100967   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.101128   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.187796   58678 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:56:35.192221   58678 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:56:35.192248   58678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:56:35.192315   58678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:56:35.192383   58678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:56:35.192467   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:56:35.204227   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:35.230404   58678 start.go:296] duration metric: took 133.555408ms for postStartSetup
	I0708 20:56:35.230446   58678 fix.go:56] duration metric: took 21.04954132s for fixHost
	I0708 20:56:35.230464   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.233341   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.233654   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.233685   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.233878   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.234070   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.234248   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.234413   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.234611   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:35.234834   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:35.234849   58678 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:56:35.348439   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472195.300246165
	
	I0708 20:56:35.348459   58678 fix.go:216] guest clock: 1720472195.300246165
	I0708 20:56:35.348468   58678 fix.go:229] Guest: 2024-07-08 20:56:35.300246165 +0000 UTC Remote: 2024-07-08 20:56:35.230449891 +0000 UTC m=+338.995803708 (delta=69.796274ms)
	I0708 20:56:35.348487   58678 fix.go:200] guest clock delta is within tolerance: 69.796274ms
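The fix.go lines above compare the guest's clock reading against the host-side timestamp taken for the same moment and accept the drift when it falls within a tolerance. A small sketch of that comparison, using the values from the log (the one-second tolerance is an assumed figure; the log only shows that ~70ms passes the check):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance reports the absolute difference between the guest
	// clock and the host-observed time, and whether it is within tolerance.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Values taken from the log lines above.
		guest := time.Unix(1720472195, 300246165).UTC()
		host := time.Date(2024, 7, 8, 20, 56, 35, 230449891, time.UTC)
		delta, ok := clockDeltaWithinTolerance(guest, host, time.Second) // tolerance assumed
		fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
	}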
	I0708 20:56:35.348492   58678 start.go:83] releasing machines lock for "no-preload-028021", held for 21.167624903s
	I0708 20:56:35.348509   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.348752   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:35.351300   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.351779   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.351806   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.351977   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352557   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352725   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352799   58678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:56:35.352839   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.352942   58678 ssh_runner.go:195] Run: cat /version.json
	I0708 20:56:35.352969   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.355646   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356037   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.356071   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356117   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356267   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.356470   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.356555   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.356580   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356642   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.356706   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.356770   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.356885   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.357020   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.357154   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.438344   58678 ssh_runner.go:195] Run: systemctl --version
	I0708 20:56:35.470518   58678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:56:35.628022   58678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:56:35.636390   58678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:56:35.636468   58678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:56:35.654729   58678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:56:35.654753   58678 start.go:494] detecting cgroup driver to use...
	I0708 20:56:35.654824   58678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:56:35.678564   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:56:35.697122   58678 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:56:35.697202   58678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:56:35.713388   58678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:56:35.728254   58678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:56:35.874433   58678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:56:36.062521   58678 docker.go:233] disabling docker service ...
	I0708 20:56:36.062614   58678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:56:36.081225   58678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:56:36.096855   58678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:56:36.229455   58678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:56:36.375525   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:56:36.390772   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:56:36.411762   58678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:56:36.411905   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.423149   58678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:56:36.423218   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.434145   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.447568   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.458758   58678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:56:36.469393   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.479663   58678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.501298   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.512407   58678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:56:36.522400   58678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:56:36.522469   58678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:56:36.536310   58678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:56:36.547955   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:36.680408   58678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:56:36.860344   58678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:56:36.860416   58678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:56:36.866153   58678 start.go:562] Will wait 60s for crictl version
	I0708 20:56:36.866221   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:36.871623   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:56:36.917564   58678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:56:36.917655   58678 ssh_runner.go:195] Run: crio --version
	I0708 20:56:36.954595   58678 ssh_runner.go:195] Run: crio --version
	I0708 20:56:36.985788   58678 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:56:32.805051   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:35.303979   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:36.303556   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.303581   59655 pod_ready.go:81] duration metric: took 5.506548207s for pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.303590   59655 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.308571   59655 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.308596   59655 pod_ready.go:81] duration metric: took 4.998994ms for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.308610   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.314379   59655 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.314402   59655 pod_ready.go:81] duration metric: took 5.784289ms for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.314411   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.942775   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:39.440483   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:36.987568   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:36.990699   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:36.991105   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:36.991146   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:36.991378   58678 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 20:56:36.996102   58678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:37.012228   58678 kubeadm.go:877] updating cluster {Name:no-preload-028021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:56:37.012390   58678 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:56:37.012439   58678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:37.050690   58678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:56:37.050715   58678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 20:56:37.050765   58678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.050988   58678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.051005   58678 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.051146   58678 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.051199   58678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.051323   58678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.051396   58678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.051560   58678 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0708 20:56:37.052741   58678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.052826   58678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.052840   58678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.052853   58678 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0708 20:56:37.052910   58678 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.052742   58678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.052741   58678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.052744   58678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.237714   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.238720   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.246932   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0708 20:56:37.253938   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.256152   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.264291   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.304685   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.316620   58678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.2" does not exist at hash "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940" in container runtime
	I0708 20:56:37.316664   58678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.316710   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.352464   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.387003   58678 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0708 20:56:37.387039   58678 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.387078   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.513840   58678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.2" does not exist at hash "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974" in container runtime
	I0708 20:56:37.513886   58678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.513925   58678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.2" does not exist at hash "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe" in container runtime
	I0708 20:56:37.513938   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.513958   58678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.513987   58678 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0708 20:56:37.514000   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514016   58678 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.514054   58678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.2" does not exist at hash "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772" in container runtime
	I0708 20:56:37.514097   58678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.514061   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514136   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514138   58678 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0708 20:56:37.514078   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.514159   58678 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.514191   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514224   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.535635   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.535678   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.535744   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.535744   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.596995   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0708 20:56:37.597092   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.597102   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.651190   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0708 20:56:37.651320   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:37.695843   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0708 20:56:37.695944   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0708 20:56:37.695995   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.2 (exists)
	I0708 20:56:37.696018   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:37.696020   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.696052   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:37.695849   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0708 20:56:37.696071   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.695875   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0708 20:56:37.696117   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:37.696211   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:37.721410   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0708 20:56:37.721453   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.2 (exists)
	I0708 20:56:37.721536   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0708 20:56:37.721644   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:39.890974   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.19489331s)
	I0708 20:56:39.891017   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.2 (exists)
	I0708 20:56:39.891070   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2: (2.194976871s)
	I0708 20:56:39.891096   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 from cache
	I0708 20:56:39.891107   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.194875907s)
	I0708 20:56:39.891117   58678 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:39.891120   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0708 20:56:39.891156   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.2: (2.194966409s)
	I0708 20:56:39.891175   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:39.891184   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.2 (exists)
	I0708 20:56:39.891196   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.169535432s)
	I0708 20:56:39.891212   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0708 20:56:37.824606   59655 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:37.824634   59655 pod_ready.go:81] duration metric: took 1.510214968s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.824646   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vq4l8" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.829963   59655 pod_ready.go:92] pod "kube-proxy-vq4l8" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:37.829988   59655 pod_ready.go:81] duration metric: took 5.334688ms for pod "kube-proxy-vq4l8" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.829997   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:38.338575   59655 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:38.338611   59655 pod_ready.go:81] duration metric: took 508.60515ms for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:38.338625   59655 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:40.346498   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:41.939773   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:43.941838   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:41.962256   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.071056184s)
	I0708 20:56:41.962281   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0708 20:56:41.962304   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:41.962349   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:44.325730   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2: (2.363358371s)
	I0708 20:56:44.325760   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 from cache
	I0708 20:56:44.325789   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:44.325853   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:42.845177   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:44.846215   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:46.441086   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:48.939348   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:46.588882   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.263001s)
	I0708 20:56:46.588909   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 from cache
	I0708 20:56:46.588931   58678 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:46.588980   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:50.590689   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.001689035s)
	I0708 20:56:50.590724   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0708 20:56:50.590758   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:50.590813   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:47.345179   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:49.346736   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:51.846003   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:50.940095   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:53.441346   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:52.446198   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2: (1.855362154s)
	I0708 20:56:52.446229   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 from cache
	I0708 20:56:52.446247   58678 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:52.446284   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:53.400379   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0708 20:56:53.400419   58678 cache_images.go:123] Successfully loaded all cached images
	I0708 20:56:53.400424   58678 cache_images.go:92] duration metric: took 16.349697925s to LoadCachedImages
	I0708 20:56:53.400436   58678 kubeadm.go:928] updating node { 192.168.39.108 8443 v1.30.2 crio true true} ...
	I0708 20:56:53.400599   58678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-028021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:53.400692   58678 ssh_runner.go:195] Run: crio config
	I0708 20:56:53.452091   58678 cni.go:84] Creating CNI manager for ""
	I0708 20:56:53.452117   58678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:53.452131   58678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:53.452150   58678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.108 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-028021 NodeName:no-preload-028021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:53.452285   58678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-028021"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:53.452344   58678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:53.464447   58678 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:53.464522   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:53.474930   58678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0708 20:56:53.493701   58678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:53.511491   58678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0708 20:56:53.530848   58678 ssh_runner.go:195] Run: grep 192.168.39.108	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:53.534931   58678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:53.547590   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:53.658960   58678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:53.677127   58678 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021 for IP: 192.168.39.108
	I0708 20:56:53.677151   58678 certs.go:194] generating shared ca certs ...
	I0708 20:56:53.677169   58678 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:53.677296   58678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:53.677330   58678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:53.677338   58678 certs.go:256] generating profile certs ...
	I0708 20:56:53.677420   58678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.key
	I0708 20:56:53.677471   58678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.key.c3084b2b
	I0708 20:56:53.677511   58678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.key
	I0708 20:56:53.677613   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:53.677639   58678 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:53.677645   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:53.677677   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:53.677752   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:53.677785   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:53.677825   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:53.680483   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:53.739386   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:53.770850   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:53.813958   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:53.850256   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0708 20:56:53.891539   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:56:53.921136   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:53.948966   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:56:53.977129   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:54.002324   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:54.028222   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:54.054099   58678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:54.073386   58678 ssh_runner.go:195] Run: openssl version
	I0708 20:56:54.079883   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:54.092980   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.097451   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.097503   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.103507   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:54.115123   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:54.126757   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.131534   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.131579   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.137333   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:54.148368   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:54.159628   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.164230   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.164280   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.170068   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:56:54.182152   58678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:54.187146   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:54.193425   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:54.200491   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:54.207006   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:54.213285   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:54.220313   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0708 20:56:54.227497   58678 kubeadm.go:391] StartCluster: {Name:no-preload-028021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:54.227597   58678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:54.227657   58678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:54.273025   58678 cri.go:89] found id: ""
	I0708 20:56:54.273094   58678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:54.284942   58678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:54.284965   58678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:54.284972   58678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:54.285023   58678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:54.296666   58678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:54.297740   58678 kubeconfig.go:125] found "no-preload-028021" server: "https://192.168.39.108:8443"
	I0708 20:56:54.299928   58678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:54.310186   58678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.108
	I0708 20:56:54.310224   58678 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:54.310235   58678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:54.310290   58678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:54.351640   58678 cri.go:89] found id: ""
	I0708 20:56:54.351709   58678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:54.370292   58678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:54.380551   58678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:54.380571   58678 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:54.380611   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:56:54.391462   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:54.391525   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:54.401804   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:56:54.411423   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:54.411501   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:54.422126   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:56:54.432236   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:54.432299   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:54.443001   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:56:54.454210   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:54.454271   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:56:54.465426   58678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:54.477714   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:54.593844   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.651092   58678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.057214047s)
	I0708 20:56:55.651120   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.862342   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.952093   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:56.070164   58678 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:56.070232   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:53.846869   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:55.847242   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:55.941645   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:58.440406   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:56.570644   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:57.071067   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:57.099879   58678 api_server.go:72] duration metric: took 1.02971362s to wait for apiserver process to appear ...
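
Before probing /healthz, the tooling first waits for a kube-apiserver process to exist at all, retrying pgrep until it succeeds (about a second here). Below is a small, hedged retry loop in the same spirit; the pgrep pattern is the one in the log, while the timeout and sleep interval are arbitrary choices for the example.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process shows up
    // or the deadline passes. -x matches the whole pattern, -n picks the newest
    // matching process, -f matches against the full command line.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kube-apiserver process is up")
    }
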
	I0708 20:56:57.099907   58678 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:57.099932   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.102677   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:57:00.102805   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:57:00.102854   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.143035   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:57:00.143069   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:57:00.600694   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.605315   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:00.605349   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:01.100628   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:01.106209   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:01.106235   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:58.345619   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:00.346515   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:01.600656   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:01.605348   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:01.605381   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:02.101023   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:02.105457   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:02.105490   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:02.600058   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:02.604370   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:02.604397   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:03.100641   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:03.105655   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:03.105685   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:03.600193   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:03.604714   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I0708 20:57:03.617761   58678 api_server.go:141] control plane version: v1.30.2
	I0708 20:57:03.617795   58678 api_server.go:131] duration metric: took 6.517881236s to wait for apiserver health ...
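
The healthz probes above progress from 403 (anonymous requests are rejected until the RBAC bootstrap roles exist), through 500 with a handful of [-] post-start hooks still failing, to a plain 200 "ok". A minimal sketch of such a poll follows; it assumes an anonymous HTTPS GET against the apiserver's self-signed certificate and simply treats anything other than 200 as "not ready yet", which is roughly what the log's retry loop does.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver's /healthz endpoint until it returns 200.
    // The 403 ("system:anonymous") and 500 ("[-]poststarthook/... failed") bodies
    // seen in the log are simply treated as "not healthy yet".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a self-signed cert during bring-up, so this
            // anonymous probe skips verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.108:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
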
	I0708 20:57:03.617805   58678 cni.go:84] Creating CNI manager for ""
	I0708 20:57:03.617811   58678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:57:03.619739   58678 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:57:00.940450   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:03.448484   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:03.621363   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:57:03.635846   58678 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
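
Configuring the bridge CNI amounts to creating /etc/cni/net.d and dropping a conflist into it; the log copies a 496-byte 1-k8s.conflist whose exact contents are not shown. The sketch below writes an illustrative bridge + host-local configuration of the general shape such a file takes; the JSON is an assumption for illustration only, not the file minikube actually ships.

    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        // Illustrative only: the real 1-k8s.conflist is not shown in the log; this
        // JSON just sketches a typical bridge + host-local + portmap configuration.
        conflist := []byte(`{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`)
        dir := "/etc/cni/net.d"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), conflist, 0o644); err != nil {
            log.Fatal(err)
        }
    }
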
	I0708 20:57:03.667045   58678 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:57:03.686236   58678 system_pods.go:59] 8 kube-system pods found
	I0708 20:57:03.686308   58678 system_pods.go:61] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:57:03.686322   58678 system_pods.go:61] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:57:03.686334   58678 system_pods.go:61] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:57:03.686348   58678 system_pods.go:61] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:57:03.686354   58678 system_pods.go:61] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 20:57:03.686363   58678 system_pods.go:61] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:57:03.686371   58678 system_pods.go:61] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:57:03.686379   58678 system_pods.go:61] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 20:57:03.686390   58678 system_pods.go:74] duration metric: took 19.320099ms to wait for pod list to return data ...
	I0708 20:57:03.686402   58678 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:57:03.696401   58678 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:57:03.696436   58678 node_conditions.go:123] node cpu capacity is 2
	I0708 20:57:03.696449   58678 node_conditions.go:105] duration metric: took 10.038061ms to run NodePressure ...
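
The node-conditions check above reads capacity straight off the Node object (2 CPUs and 17734596Ki of ephemeral storage here) after listing the kube-system pods. A client-go sketch of the equivalent read follows; the kubeconfig path is a placeholder, and the code is a simplified stand-in for minikube's system_pods.go / node_conditions.go helpers rather than their actual implementation.

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder path; the test run uses its jenkins workspace kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx := context.Background()

        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))

        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n",
                n.Name, cpu.String(), storage.String())
        }
    }
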
	I0708 20:57:03.696474   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:57:03.981698   58678 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:57:03.987357   58678 kubeadm.go:733] kubelet initialised
	I0708 20:57:03.987379   58678 kubeadm.go:734] duration metric: took 5.653044ms waiting for restarted kubelet to initialise ...
	I0708 20:57:03.987387   58678 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:03.993341   58678 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:03.999133   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:03.999165   58678 pod_ready.go:81] duration metric: took 5.798521ms for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:03.999177   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:03.999188   58678 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.004640   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "etcd-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.004666   58678 pod_ready.go:81] duration metric: took 5.471219ms for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.004676   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "etcd-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.004685   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.011313   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-apiserver-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.011342   58678 pod_ready.go:81] duration metric: took 6.65044ms for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.011354   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-apiserver-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.011364   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.071038   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.071092   58678 pod_ready.go:81] duration metric: took 59.716762ms for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.071105   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.071114   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.470702   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-proxy-6p6l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.470732   58678 pod_ready.go:81] duration metric: took 399.6044ms for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.470743   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-proxy-6p6l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.470753   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.871002   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-scheduler-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.871036   58678 pod_ready.go:81] duration metric: took 400.275337ms for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.871045   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-scheduler-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.871052   58678 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:05.270858   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:05.270883   58678 pod_ready.go:81] duration metric: took 399.822389ms for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:05.270892   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:05.270899   58678 pod_ready.go:38] duration metric: took 1.283504929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
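
The "(skipping!)" lines above show the readiness wait short-circuiting: while the hosting node still reports Ready=False, none of its pods can become Ready, so each pod check is skipped rather than waited on, and the whole pass completes in about a second. A hedged client-go sketch of that node-first check follows; the kubeconfig path is a placeholder, the pod name is taken from the log, and the logic is a simplification of pod_ready.go rather than a copy of it.

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod has condition Ready=True, but checks the
    // hosting node first: a pod on a NotReady node cannot become Ready, so the
    // caller can skip it, which is what the "(skipping!)" log lines reflect.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        if pod.Spec.NodeName != "" {
            node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
                    return false, fmt.Errorf("node %q hosting pod %q is not Ready", node.Name, name)
                }
            }
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ok, err := podReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-bb6cr")
        fmt.Println(ok, err)
    }
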
	I0708 20:57:05.270914   58678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 20:57:05.284879   58678 ops.go:34] apiserver oom_adj: -16
	I0708 20:57:05.284900   58678 kubeadm.go:591] duration metric: took 10.999921787s to restartPrimaryControlPlane
	I0708 20:57:05.284912   58678 kubeadm.go:393] duration metric: took 11.057424996s to StartCluster
	I0708 20:57:05.284931   58678 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:57:05.285024   58678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:57:05.287297   58678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:57:05.287607   58678 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 20:57:05.287708   58678 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 20:57:05.287790   58678 addons.go:69] Setting storage-provisioner=true in profile "no-preload-028021"
	I0708 20:57:05.287807   58678 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:57:05.287809   58678 addons.go:69] Setting default-storageclass=true in profile "no-preload-028021"
	I0708 20:57:05.287845   58678 addons.go:69] Setting metrics-server=true in profile "no-preload-028021"
	I0708 20:57:05.287900   58678 addons.go:234] Setting addon metrics-server=true in "no-preload-028021"
	W0708 20:57:05.287912   58678 addons.go:243] addon metrics-server should already be in state true
	I0708 20:57:05.287946   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.287854   58678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-028021"
	I0708 20:57:05.287825   58678 addons.go:234] Setting addon storage-provisioner=true in "no-preload-028021"
	W0708 20:57:05.288007   58678 addons.go:243] addon storage-provisioner should already be in state true
	I0708 20:57:05.288040   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.288276   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288308   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.288380   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288382   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288430   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.288413   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.289690   58678 out.go:177] * Verifying Kubernetes components...
	I0708 20:57:05.291336   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:57:05.310203   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0708 20:57:05.310610   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.311107   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.311129   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.311527   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.311990   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.312026   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.332966   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0708 20:57:05.332984   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I0708 20:57:05.333056   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I0708 20:57:05.333449   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333466   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333497   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333994   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334014   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334138   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334146   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334158   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334163   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334347   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334514   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.334640   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334683   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334822   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.335285   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.335304   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.337444   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.338763   58678 addons.go:234] Setting addon default-storageclass=true in "no-preload-028021"
	W0708 20:57:05.338785   58678 addons.go:243] addon default-storageclass should already be in state true
	I0708 20:57:05.338814   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.339217   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.339304   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.339800   58678 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 20:57:05.341280   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 20:57:05.341303   58678 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 20:57:05.341327   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.344838   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.345488   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.345504   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.345683   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.345892   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.346146   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.346326   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.359060   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0708 20:57:05.359692   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.360186   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.360207   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.360545   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.361128   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.361173   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.361352   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0708 20:57:05.361971   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.362509   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.362525   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.362911   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.363148   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.364747   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.366914   58678 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:57:05.368450   58678 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:57:05.368467   58678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 20:57:05.368483   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.372067   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.372368   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.372387   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.372767   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.373030   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.373235   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.373389   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.379255   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39973
	I0708 20:57:05.379732   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.380405   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.380428   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.380832   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.381039   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.382973   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.383191   58678 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 20:57:05.383211   58678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 20:57:05.383231   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.386273   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.386682   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.386705   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.386997   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.387184   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.387336   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.387497   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
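
Each addon install above opens its own SSH session to the node (192.168.39.108:22 as user docker, authenticating with the machine's id_rsa) before scp'ing manifests across. A bare-bones sketch of opening such a client with golang.org/x/crypto/ssh follows; minikube's sshutil wraps considerably more than this, the key path and address are copied from the log, and skipping host-key verification is only reasonable for a throwaway test VM.

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // A throwaway test VM, so host-key checking is skipped here.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "192.168.39.108:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("sudo mkdir -p /etc/kubernetes/addons")
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("remote command output: %s", out)
    }
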
	I0708 20:57:05.506081   58678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:57:05.525373   58678 node_ready.go:35] waiting up to 6m0s for node "no-preload-028021" to be "Ready" ...
	I0708 20:57:05.594638   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 20:57:05.594665   58678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 20:57:05.615378   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:57:05.620306   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 20:57:05.620331   58678 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 20:57:05.639840   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 20:57:05.692078   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 20:57:05.692109   58678 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 20:57:05.756364   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 20:57:06.822244   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.206830336s)
	I0708 20:57:06.822310   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18243745s)
	I0708 20:57:06.822323   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822385   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065981271s)
	I0708 20:57:06.822418   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822432   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822390   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822351   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822504   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822850   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822870   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.822879   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822886   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822955   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.822971   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822976   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822993   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.822995   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.823009   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.823020   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.823010   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.823051   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.823154   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.823164   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.823366   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.823380   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.823390   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.825436   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.825455   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.825465   58678 addons.go:475] Verifying addon metrics-server=true in "no-preload-028021"
	I0708 20:57:06.830088   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.830108   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.830406   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.830420   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.830423   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.832322   58678 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0708 20:57:02.845629   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:05.353827   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:05.940469   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:08.439911   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:06.833974   58678 addons.go:510] duration metric: took 1.546270475s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
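
The addon YAML is copied into /etc/kubernetes/addons on the node and applied with the node-local kubectl against /var/lib/minikube/kubeconfig, exactly as the Run lines above show. Below is a compact Go sketch of that final apply step, with the binary, kubeconfig, and manifest paths copied from the log and error handling reduced to a fatal exit.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const kubectl = "/var/lib/minikube/binaries/v1.30.2/kubectl"
        manifests := []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        // Mirror the log's invocation: sudo accepts the KUBECONFIG=... assignment
        // ahead of the command and passes it into the kubectl environment.
        args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("sudo", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("kubectl apply failed: %v", err)
        }
    }
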
	I0708 20:57:07.529328   58678 node_ready.go:53] node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:09.529406   58678 node_ready.go:53] node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:11.030134   58678 node_ready.go:49] node "no-preload-028021" has status "Ready":"True"
	I0708 20:57:11.030162   58678 node_ready.go:38] duration metric: took 5.504751555s for node "no-preload-028021" to be "Ready" ...
	I0708 20:57:11.030174   58678 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:11.035309   58678 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.039750   58678 pod_ready.go:92] pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.039772   58678 pod_ready.go:81] duration metric: took 4.436756ms for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.039783   58678 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.044726   58678 pod_ready.go:92] pod "etcd-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.044748   58678 pod_ready.go:81] duration metric: took 4.958058ms for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.044756   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.049083   58678 pod_ready.go:92] pod "kube-apiserver-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.049104   58678 pod_ready.go:81] duration metric: took 4.34014ms for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.049115   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:07.846290   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:10.344964   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:10.939618   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:13.445191   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:13.056307   58678 pod_ready.go:102] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:15.056817   58678 pod_ready.go:102] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:16.063838   58678 pod_ready.go:92] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.063864   58678 pod_ready.go:81] duration metric: took 5.014740635s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.063875   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.082486   58678 pod_ready.go:92] pod "kube-proxy-6p6l6" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.082529   58678 pod_ready.go:81] duration metric: took 18.642044ms for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.082545   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.092312   58678 pod_ready.go:92] pod "kube-scheduler-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.092337   58678 pod_ready.go:81] duration metric: took 9.783638ms for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.092347   58678 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
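	(Editor's note: the 58678 stream above is minikube's pod_ready helpers polling each kube-system pod's Ready condition, with a 6m0s budget, before settling into the long metrics-server wait that follows. Below is a minimal standalone sketch of that kind of Ready-condition polling loop, assuming kubectl is on PATH; the context, namespace, and pod names are taken from the log purely as placeholders, and this is not minikube's own pod_ready.go implementation.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady asks kubectl for the pod's Ready condition; all names are placeholders.
	func podReady(ctx, ns, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", ctx, "-n", ns, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget the log shows
		for time.Now().Before(deadline) {
			ok, err := podReady("no-preload-028021", "kube-system", "metrics-server-569cc877fc-4kpfm")
			fmt.Printf("ready=%v err=%v\n", ok, err)
			if ok {
				return
			}
			time.Sleep(2 * time.Second) // the log polls on a similar short cadence
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}

	(The real helper talks to the API server via client-go rather than shelling out to kubectl, so treat this only as an illustration of the repeated Ready check visible in the lines above and below.)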
	I0708 20:57:16.353120   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:57:16.353203   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:57:16.355269   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:57:16.355317   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:57:16.355404   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:57:16.355558   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:57:16.355708   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:57:16.355815   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:57:16.358151   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:57:16.358312   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:57:16.358411   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:57:16.358531   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:57:16.358641   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:57:16.358732   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:57:16.358798   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:57:16.358893   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:57:16.359004   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:57:16.359128   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:57:16.359209   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:57:16.359288   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:57:16.359384   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:57:16.359509   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:57:16.359614   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:57:16.359725   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:57:16.359794   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:57:16.359881   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:57:16.359963   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:57:16.360002   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:57:16.360099   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:57:16.361960   57466 out.go:204]   - Booting up control plane ...
	I0708 20:57:16.362053   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:57:16.362196   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:57:16.362283   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:57:16.362402   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:57:16.362589   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:57:16.362819   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:57:16.362930   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363170   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363242   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363473   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363580   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363786   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363873   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364093   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364247   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364435   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364445   57466 kubeadm.go:309] 
	I0708 20:57:16.364476   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:57:16.364533   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:57:16.364541   57466 kubeadm.go:309] 
	I0708 20:57:16.364601   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:57:16.364636   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:57:16.364796   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:57:16.364820   57466 kubeadm.go:309] 
	I0708 20:57:16.364958   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:57:16.365016   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:57:16.365057   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:57:16.365063   57466 kubeadm.go:309] 
	I0708 20:57:16.365208   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:57:16.365339   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:57:16.365356   57466 kubeadm.go:309] 
	I0708 20:57:16.365490   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:57:16.365589   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:57:16.365694   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:57:16.365869   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:57:16.365969   57466 kubeadm.go:309] 
	I0708 20:57:16.365972   57466 kubeadm.go:393] duration metric: took 7m56.670441698s to StartCluster
	I0708 20:57:16.366023   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:57:16.366090   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:57:16.435868   57466 cri.go:89] found id: ""
	I0708 20:57:16.435896   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.435904   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:57:16.435910   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:57:16.435969   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:57:16.478844   57466 cri.go:89] found id: ""
	I0708 20:57:16.478881   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.478896   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:57:16.478904   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:57:16.478974   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:57:16.517414   57466 cri.go:89] found id: ""
	I0708 20:57:16.517439   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.517448   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:57:16.517455   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:57:16.517516   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:57:16.557036   57466 cri.go:89] found id: ""
	I0708 20:57:16.557063   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.557074   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:57:16.557081   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:57:16.557153   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:57:16.593604   57466 cri.go:89] found id: ""
	I0708 20:57:16.593631   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.593641   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:57:16.593648   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:57:16.593704   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:57:16.634143   57466 cri.go:89] found id: ""
	I0708 20:57:16.634173   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.634183   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:57:16.634190   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:57:16.634248   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:57:16.676553   57466 cri.go:89] found id: ""
	I0708 20:57:16.676585   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.676595   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:57:16.676602   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:57:16.676663   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:57:16.715652   57466 cri.go:89] found id: ""
	I0708 20:57:16.715674   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.715682   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:57:16.715692   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:57:16.715703   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:57:16.730747   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:57:16.730776   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:57:16.814950   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:57:16.814976   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:57:16.815005   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:57:16.921144   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:57:16.921194   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:57:16.973261   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:57:16.973294   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 20:57:17.031242   57466 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0708 20:57:17.031307   57466 out.go:239] * 
	W0708 20:57:17.031362   57466 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.031389   57466 out.go:239] * 
	W0708 20:57:17.032214   57466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:57:17.035847   57466 out.go:177] 
	W0708 20:57:17.037198   57466 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.037247   57466 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0708 20:57:17.037274   57466 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0708 20:57:17.039077   57466 out.go:177] 
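	(Editor's note: the 57466 stream ends here with K8S_KUBELET_NOT_RUNNING — kubeadm's [kubelet-check] never got a response from http://localhost:10248/healthz, so the v1.20.0 control plane never came up; this corresponds to the TestStartStop/group/old-k8s-version/serial/FirstStart failure listed at the top of the report. The sketch below is a hedged, self-contained way to run the troubleshooting steps the kubeadm output itself recommends; the command names and the healthz endpoint come straight from the log, while the helper function and timeouts are purely illustrative.)

	package main

	import (
		"fmt"
		"net/http"
		"os/exec"
		"time"
	)

	// run executes a command and prints its combined output; failures are only reported, not fatal.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v (err=%v)\n%s\n", name, args, err, out)
	}

	func main() {
		// Same endpoint kubeadm's [kubelet-check] polls in the output above.
		client := &http.Client{Timeout: 2 * time.Second}
		if resp, err := client.Get("http://localhost:10248/healthz"); err != nil {
			fmt.Println("kubelet healthz unreachable:", err)
		} else {
			fmt.Println("kubelet healthz status:", resp.Status)
			resp.Body.Close()
		}

		// Commands suggested verbatim by the kubeadm failure message.
		run("systemctl", "status", "kubelet", "--no-pager")
		run("journalctl", "-xeu", "kubelet", "--no-pager", "-n", "100")
		run("crictl", "--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a")
	}

	(On this run the minikube hint that follows the kubeadm output — inspect 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd — is the suggested next step; the sketch only automates the inspection side.)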
	I0708 20:57:12.345241   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:14.346235   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:16.347467   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:15.940334   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:17.943302   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:18.102691   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:20.599066   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:18.847908   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:21.345112   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:20.441347   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:22.939786   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:24.940449   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:22.600192   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:25.100175   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:23.346438   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:25.845181   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.439923   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:29.940540   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.600010   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:30.099104   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.845456   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:29.845526   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.440285   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.939729   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.101616   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.598135   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.345268   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.844782   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.845440   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.940110   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:38.940964   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.600034   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:39.099711   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.100745   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:38.847223   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.344382   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.441047   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.939510   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.599982   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:46.101913   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.345029   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:45.345390   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:45.939787   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:47.940956   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:49.941949   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:48.598871   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:50.600154   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:47.346271   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:49.346661   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:51.844897   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:52.439646   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:54.440569   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:52.604096   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:55.103841   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:54.345832   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:56.845398   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:56.440640   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:58.939537   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:57.598505   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:00.098797   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:58.848087   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:01.346566   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:00.940434   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:03.439927   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:02.602188   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:05.100284   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:03.848841   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:06.346912   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:05.441676   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:07.942369   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:07.599099   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:09.601188   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:08.848926   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:11.346458   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:10.439620   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:12.440274   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:14.939694   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:12.098918   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:14.099419   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:13.844947   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:15.845203   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:16.940812   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:18.941307   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:16.599322   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:19.098815   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:21.100160   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:17.845975   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:20.347071   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:21.439802   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:23.441183   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:23.598459   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:26.098717   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:22.844674   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:24.845210   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:26.848564   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:25.939783   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:28.439490   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:28.099236   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:30.599130   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:29.344306   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:31.345070   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:30.439832   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:32.440229   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:34.441525   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:32.600143   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:35.100068   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:33.345938   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:35.845421   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:36.939642   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:38.941263   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:37.599587   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:40.099121   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:37.845529   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:40.345830   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:41.441175   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:43.941076   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:42.099418   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:44.101452   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:42.844426   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:44.846831   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:45.941732   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:48.440398   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:46.599328   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:48.600055   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:51.099949   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:47.347094   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:49.846223   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:50.940172   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:52.940229   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:54.941034   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:53.100619   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:55.599681   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:52.347726   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:54.845461   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:56.846142   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:56.941957   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.439408   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:57.600406   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.600450   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.344802   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:01.345852   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:01.939259   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:03.940182   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:02.101218   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:04.600651   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:03.845810   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:05.846170   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:05.940757   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:08.439635   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:07.100571   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:09.100718   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:08.344894   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:10.346744   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:10.440413   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:12.440882   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:14.940151   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:11.601260   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:13.603589   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:16.112928   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:12.848135   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:15.346591   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:17.440326   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:19.440421   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:18.598791   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:20.600589   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:17.845413   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:19.849057   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:21.941414   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:24.441214   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:23.100854   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:25.599374   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:22.346925   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:24.845239   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:26.941311   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:28.948332   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:28.100928   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:30.600465   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:27.345835   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:29.846655   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:31.848193   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:31.440572   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:33.939354   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:33.100068   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:35.601159   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:34.345252   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:36.346479   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:35.939843   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:37.941381   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:38.100393   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.102157   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:38.844435   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.845328   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.438849   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:42.441256   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:44.442877   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:42.601119   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:45.101132   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:43.345149   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:45.345522   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:46.940287   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:48.941589   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:47.101717   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:49.598367   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:47.846030   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:49.846247   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:51.438745   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:53.441587   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:51.599309   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:54.105369   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:56.110085   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:52.347026   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:54.845971   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:55.939702   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:57.940731   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:58.598821   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:00.599435   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:57.345043   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:59.346796   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:01.347030   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:00.439467   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:02.443994   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:04.941721   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:02.599994   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:05.098379   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:03.845802   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:05.846016   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:07.439561   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:09.440326   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:07.099339   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:09.599746   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:08.345432   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:10.347888   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:11.940331   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:13.940496   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:12.100751   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:14.597860   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:12.349653   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:14.846452   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:16.440554   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:18.441219   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:19.434076   59107 pod_ready.go:81] duration metric: took 4m0.000896796s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" ...
	E0708 21:00:19.434112   59107 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0708 21:00:19.434131   59107 pod_ready.go:38] duration metric: took 4m10.050938227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:00:19.434157   59107 kubeadm.go:591] duration metric: took 4m18.183643708s to restartPrimaryControlPlane
	W0708 21:00:19.434219   59107 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 21:00:19.434258   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
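
The pod_ready.go lines above are a readiness poll: every couple of seconds the test checks whether the metrics-server pod's "Ready" condition is True, and after 4m0s it gives up and falls through to a full kubeadm reset. The sketch below is only an illustration of that kind of check written with client-go; the function names (isPodReady, waitPodReady), the 2s interval, and the assumption of an already-configured *kubernetes.Clientset are mine, not minikube's actual implementation.

// Illustrative sketch only: poll the PodReady condition until it is True or a
// timeout expires, similar in spirit to the pod_ready.go loop logged above.
// Assumes a configured *kubernetes.Clientset; not minikube's actual code.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls every 2s until the pod is Ready or the timeout expires,
// mirroring the ~4m0s wait seen in the log.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting %s for pod %q in %q namespace to be Ready", timeout, name, ns)
}
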
	I0708 21:00:16.598896   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:18.598974   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:20.599027   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:17.345157   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:19.345498   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:21.346939   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:22.599140   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:24.600455   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:23.347325   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:25.846384   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:27.104536   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:29.598836   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:27.847635   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:30.345065   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:31.600246   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:34.099964   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:32.348256   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:34.846942   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:36.598075   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:38.599175   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:40.599720   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:37.345319   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:38.339580   59655 pod_ready.go:81] duration metric: took 4m0.000925316s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" ...
	E0708 21:00:38.339615   59655 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0708 21:00:38.339635   59655 pod_ready.go:38] duration metric: took 4m7.551446129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:00:38.339667   59655 kubeadm.go:591] duration metric: took 4m17.566917749s to restartPrimaryControlPlane
	W0708 21:00:38.339731   59655 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 21:00:38.339763   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 21:00:43.101768   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:45.102321   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:47.599770   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:50.100703   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:51.419295   59107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.985013246s)
	I0708 21:00:51.419373   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:00:51.438876   59107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:00:51.451558   59107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:00:51.463932   59107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:00:51.463959   59107 kubeadm.go:156] found existing configuration files:
	
	I0708 21:00:51.464013   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 21:00:51.476729   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:00:51.476791   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:00:51.488357   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 21:00:51.499650   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:00:51.499720   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:00:51.510559   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 21:00:51.522747   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:00:51.522821   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:00:51.534156   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 21:00:51.545057   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:00:51.545123   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
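
The grep/rm sequence above (the kubeadm.go:162 messages) checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it, so kubeadm init can regenerate it. Below is a hedged sketch of that logic using os/exec; it runs the commands locally and assumes passwordless sudo, whereas minikube issues them over SSH via ssh_runner.

// Hedged sketch of the stale-kubeconfig cleanup seen above: keep each conf
// file only if it already points at the expected control-plane endpoint.
// Runs locally with sudo; minikube does the equivalent over SSH.
package kubeconfclean

import (
	"fmt"
	"os/exec"
)

func cleanStaleConfigs(endpoint string) error {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the endpoint is missing (or the file does not exist).
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			if err := exec.Command("sudo", "rm", "-f", conf).Run(); err != nil {
				return fmt.Errorf("removing %s: %w", conf, err)
			}
		}
	}
	return nil
}
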
	I0708 21:00:51.556712   59107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:00:51.766960   59107 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 21:00:52.599619   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:55.102565   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:01.185862   59107 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 21:01:01.185936   59107 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:01:01.186061   59107 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:01:01.186246   59107 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:01:01.186375   59107 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 21:01:01.186477   59107 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:01:01.188387   59107 out.go:204]   - Generating certificates and keys ...
	I0708 21:01:01.188489   59107 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:01:01.188575   59107 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:01:01.188655   59107 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 21:01:01.188754   59107 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 21:01:01.188856   59107 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 21:01:01.188937   59107 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 21:01:01.189015   59107 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 21:01:01.189107   59107 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 21:01:01.189216   59107 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 21:01:01.189326   59107 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 21:01:01.189381   59107 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 21:01:01.189445   59107 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:01:01.189504   59107 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:01:01.189571   59107 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 21:01:01.189636   59107 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:01:01.189732   59107 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:01:01.189822   59107 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:01:01.189939   59107 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:01:01.190019   59107 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:01:01.192426   59107 out.go:204]   - Booting up control plane ...
	I0708 21:01:01.192527   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:01:01.192598   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:01:01.192674   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:01:01.192795   59107 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:01:01.192892   59107 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:01:01.192949   59107 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:01:01.193078   59107 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 21:01:01.193150   59107 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 21:01:01.193204   59107 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001227366s
	I0708 21:01:01.193274   59107 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 21:01:01.193329   59107 kubeadm.go:309] [api-check] The API server is healthy after 5.506719576s
	I0708 21:01:01.193428   59107 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 21:01:01.193574   59107 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 21:01:01.193655   59107 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 21:01:01.193854   59107 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-239931 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 21:01:01.193936   59107 kubeadm.go:309] [bootstrap-token] Using token: uu1yg0.6mx8u39sjlxfysca
	I0708 21:01:01.196508   59107 out.go:204]   - Configuring RBAC rules ...
	I0708 21:01:01.196638   59107 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 21:01:01.196748   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 21:01:01.196867   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 21:01:01.196978   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 21:01:01.197141   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 21:01:01.197217   59107 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 21:01:01.197316   59107 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 21:01:01.197355   59107 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 21:01:01.197397   59107 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 21:01:01.197403   59107 kubeadm.go:309] 
	I0708 21:01:01.197451   59107 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 21:01:01.197457   59107 kubeadm.go:309] 
	I0708 21:01:01.197542   59107 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 21:01:01.197555   59107 kubeadm.go:309] 
	I0708 21:01:01.197597   59107 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 21:01:01.197673   59107 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 21:01:01.197748   59107 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 21:01:01.197761   59107 kubeadm.go:309] 
	I0708 21:01:01.197850   59107 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 21:01:01.197860   59107 kubeadm.go:309] 
	I0708 21:01:01.197903   59107 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 21:01:01.197912   59107 kubeadm.go:309] 
	I0708 21:01:01.197971   59107 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 21:01:01.198059   59107 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 21:01:01.198155   59107 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 21:01:01.198165   59107 kubeadm.go:309] 
	I0708 21:01:01.198279   59107 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 21:01:01.198389   59107 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 21:01:01.198400   59107 kubeadm.go:309] 
	I0708 21:01:01.198515   59107 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token uu1yg0.6mx8u39sjlxfysca \
	I0708 21:01:01.198663   59107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 21:01:01.198697   59107 kubeadm.go:309] 	--control-plane 
	I0708 21:01:01.198706   59107 kubeadm.go:309] 
	I0708 21:01:01.198821   59107 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 21:01:01.198830   59107 kubeadm.go:309] 
	I0708 21:01:01.198942   59107 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token uu1yg0.6mx8u39sjlxfysca \
	I0708 21:01:01.199078   59107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 21:01:01.199095   59107 cni.go:84] Creating CNI manager for ""
	I0708 21:01:01.199104   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:01:01.201409   59107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 21:00:57.600428   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:00.101501   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:01.202540   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 21:01:01.214691   59107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
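
The line above copies a 496-byte /etc/cni/net.d/1-k8s.conflist onto the node; the file's actual contents are not shown in the log. The sketch below writes a representative bridge-plugin conflist with host-local IPAM so the shape of such a config is visible; the field values (including the 10.244.0.0/16 subnet) are assumptions, not the file minikube generated here.

// Representative example only: a typical bridge CNI conflist with host-local
// IPAM, written to where CRI-O's CNI loader looks for network configs.
// The JSON values are assumptions; the real 496-byte file is not in the log.
package cniconf

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}`

// writeBridgeConflist creates the CNI config directory and writes the conflist.
func writeBridgeConflist() error {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		return err
	}
	return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}
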
	I0708 21:01:01.238039   59107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 21:01:01.238180   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:01.238204   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-239931 minikube.k8s.io/updated_at=2024_07_08T21_01_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=embed-certs-239931 minikube.k8s.io/primary=true
	I0708 21:01:01.255228   59107 ops.go:34] apiserver oom_adj: -16
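
The "kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default" command a few lines up binds the kube-system default service account to cluster-admin. For clarity, the sketch below builds the same object with client-go instead of kubectl; the function name and the assumption of a configured *kubernetes.Clientset are mine, and this is not how minikube itself issues the call (it shells out to kubectl over SSH, as logged).

// Sketch of the object created by the "kubectl create clusterrolebinding
// minikube-rbac" command above, expressed with client-go instead of kubectl.
package rbacsetup

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createMinikubeRBAC(ctx context.Context, cs *kubernetes.Clientset) error {
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
	}
	// Create returns AlreadyExists on reruns; callers may choose to ignore that.
	_, err := cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
	return err
}
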
	I0708 21:01:01.441736   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:01.942570   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.442775   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.941941   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:03.441910   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:03.942762   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:04.442791   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:04.942122   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.600102   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:04.601357   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:05.442031   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:05.942414   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:06.442353   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:06.942075   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:07.442007   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:07.941952   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:08.442578   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:08.942110   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:09.442438   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:09.942436   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:10.666697   59655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.326909913s)
	I0708 21:01:10.666766   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:10.684044   59655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:01:10.695291   59655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:01:10.705771   59655 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:01:10.705790   59655 kubeadm.go:156] found existing configuration files:
	
	I0708 21:01:10.705829   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0708 21:01:10.717858   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:01:10.717911   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:01:10.728721   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0708 21:01:10.738917   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:01:10.738985   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:01:10.749795   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0708 21:01:10.760976   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:01:10.761036   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:01:10.771625   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0708 21:01:10.781677   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:01:10.781738   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 21:01:10.791622   59655 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:01:10.855152   59655 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 21:01:10.855246   59655 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:01:11.027005   59655 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:01:11.027132   59655 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:01:11.027245   59655 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 21:01:11.262898   59655 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:01:07.098267   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:09.099083   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:11.099398   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:11.264777   59655 out.go:204]   - Generating certificates and keys ...
	I0708 21:01:11.264897   59655 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:01:11.265011   59655 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:01:11.265143   59655 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 21:01:11.265245   59655 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 21:01:11.265331   59655 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 21:01:11.265412   59655 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 21:01:11.265516   59655 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 21:01:11.265601   59655 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 21:01:11.265692   59655 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 21:01:11.265806   59655 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 21:01:11.265883   59655 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 21:01:11.265979   59655 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:01:11.307094   59655 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:01:11.410219   59655 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 21:01:11.840751   59655 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:01:12.163906   59655 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:01:12.260797   59655 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:01:12.261513   59655 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:01:12.264128   59655 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:01:12.266095   59655 out.go:204]   - Booting up control plane ...
	I0708 21:01:12.266212   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:01:12.266301   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:01:12.267540   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:01:12.290823   59655 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:01:12.291578   59655 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:01:12.291693   59655 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:01:10.442308   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:10.942270   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:11.442233   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:11.942533   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:12.442040   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:12.942629   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:13.441853   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:13.565655   59107 kubeadm.go:1107] duration metric: took 12.327535547s to wait for elevateKubeSystemPrivileges
	W0708 21:01:13.565704   59107 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 21:01:13.565714   59107 kubeadm.go:393] duration metric: took 5m12.375759038s to StartCluster
	I0708 21:01:13.565736   59107 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:13.565845   59107 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:01:13.568610   59107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:13.568940   59107 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:01:13.568980   59107 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 21:01:13.569061   59107 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-239931"
	I0708 21:01:13.569098   59107 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-239931"
	W0708 21:01:13.569113   59107 addons.go:243] addon storage-provisioner should already be in state true
	I0708 21:01:13.569136   59107 addons.go:69] Setting metrics-server=true in profile "embed-certs-239931"
	I0708 21:01:13.569098   59107 addons.go:69] Setting default-storageclass=true in profile "embed-certs-239931"
	I0708 21:01:13.569169   59107 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-239931"
	I0708 21:01:13.569178   59107 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:01:13.569149   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.569185   59107 addons.go:234] Setting addon metrics-server=true in "embed-certs-239931"
	W0708 21:01:13.569244   59107 addons.go:243] addon metrics-server should already be in state true
	I0708 21:01:13.569274   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.569617   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569639   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569648   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.569671   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.569673   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569698   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.570670   59107 out.go:177] * Verifying Kubernetes components...
	I0708 21:01:13.572338   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:01:13.590692   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40615
	I0708 21:01:13.590708   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0708 21:01:13.590701   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0708 21:01:13.591271   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591375   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591622   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591792   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.591806   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.591888   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.591909   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.592348   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.592368   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.592387   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.592422   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.592655   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.593065   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.593092   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.593568   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.594139   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.594196   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.596834   59107 addons.go:234] Setting addon default-storageclass=true in "embed-certs-239931"
	W0708 21:01:13.596857   59107 addons.go:243] addon default-storageclass should already be in state true
	I0708 21:01:13.596892   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.597258   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.597278   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.615398   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0708 21:01:13.616090   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.617374   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.617395   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.617542   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37809
	I0708 21:01:13.618025   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.618066   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.618450   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.618538   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.618563   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.618953   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.619151   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.621015   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.622114   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43107
	I0708 21:01:13.622533   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.623046   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.623071   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.623346   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.623757   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.624750   59107 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 21:01:13.625744   59107 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:01:13.626604   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 21:01:13.626626   59107 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 21:01:13.626650   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.627717   59107 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:13.627737   59107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 21:01:13.627756   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.628207   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.628245   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.631548   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.633692   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.633737   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.634732   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.634960   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.635186   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.635262   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.635282   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.635415   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.635581   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.635946   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.636122   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.636282   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.636468   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.650948   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34883
	I0708 21:01:13.651543   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.652143   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.652165   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.652659   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.652835   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.654717   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.654971   59107 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:13.654988   59107 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 21:01:13.655006   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.658670   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.659361   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.659475   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.659800   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.660109   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.660275   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.660406   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.813860   59107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:01:13.832841   59107 node_ready.go:35] waiting up to 6m0s for node "embed-certs-239931" to be "Ready" ...
	I0708 21:01:13.842398   59107 node_ready.go:49] node "embed-certs-239931" has status "Ready":"True"
	I0708 21:01:13.842420   59107 node_ready.go:38] duration metric: took 9.540746ms for node "embed-certs-239931" to be "Ready" ...
	I0708 21:01:13.842430   59107 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:13.853426   59107 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.861421   59107 pod_ready.go:92] pod "etcd-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.861451   59107 pod_ready.go:81] duration metric: took 7.991733ms for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.861466   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.873198   59107 pod_ready.go:92] pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.873228   59107 pod_ready.go:81] duration metric: took 11.754017ms for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.873243   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.882509   59107 pod_ready.go:92] pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.882560   59107 pod_ready.go:81] duration metric: took 9.307056ms for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.882574   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.890814   59107 pod_ready.go:92] pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.890843   59107 pod_ready.go:81] duration metric: took 8.26049ms for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.890854   59107 pod_ready.go:38] duration metric: took 48.414688ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:13.890872   59107 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:13.890934   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:13.913170   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 21:01:13.913199   59107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 21:01:13.936334   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:13.942642   59107 api_server.go:72] duration metric: took 373.624334ms to wait for apiserver process to appear ...
	I0708 21:01:13.942673   59107 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:13.942696   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 21:01:13.947241   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0708 21:01:13.948330   59107 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:13.948354   59107 api_server.go:131] duration metric: took 5.673644ms to wait for apiserver health ...
	I0708 21:01:13.948364   59107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:13.968333   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:13.999888   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 21:01:13.999920   59107 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 21:01:14.072446   59107 system_pods.go:59] 5 kube-system pods found
	I0708 21:01:14.072553   59107 system_pods.go:61] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.072575   59107 system_pods.go:61] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.072594   59107 system_pods.go:61] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.072608   59107 system_pods.go:61] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending
	I0708 21:01:14.072621   59107 system_pods.go:61] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.072637   59107 system_pods.go:74] duration metric: took 124.266452ms to wait for pod list to return data ...
	I0708 21:01:14.072663   59107 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:14.111310   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:14.111337   59107 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 21:01:14.196596   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:14.248043   59107 default_sa.go:45] found service account: "default"
	I0708 21:01:14.248075   59107 default_sa.go:55] duration metric: took 175.396297ms for default service account to be created ...
	I0708 21:01:14.248086   59107 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:14.381129   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.381166   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.381490   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.381507   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.381517   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.381525   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.383203   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:14.383213   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.383229   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.430533   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.430558   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.430835   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:14.431498   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.431558   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.440088   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.440129   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.440140   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.440148   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.440156   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.440162   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.440171   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.440176   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.440199   59107 retry.go:31] will retry after 211.74015ms: missing components: kube-dns, kube-proxy
	I0708 21:01:14.660845   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.660901   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.660916   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.660928   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.660938   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.660946   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.660990   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.661002   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.661036   59107 retry.go:31] will retry after 318.627165ms: missing components: kube-dns, kube-proxy
	I0708 21:01:14.988296   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.988336   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.988348   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.988359   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.988369   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.988376   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.988388   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.988398   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.988425   59107 retry.go:31] will retry after 333.622066ms: missing components: kube-dns, kube-proxy
	I0708 21:01:15.024853   59107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.056470802s)
	I0708 21:01:15.024902   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.024914   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.025237   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.025264   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.025266   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.025279   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.025288   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.025550   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.025566   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.348381   59107 system_pods.go:86] 8 kube-system pods found
	I0708 21:01:15.348419   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.348430   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.348440   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:15.348448   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:15.348455   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:15.348464   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:15.348473   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:15.348483   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:15.348502   59107 retry.go:31] will retry after 415.910372ms: missing components: kube-dns, kube-proxy
	I0708 21:01:15.736384   59107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.539741133s)
	I0708 21:01:15.736440   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.736456   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.736743   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.736782   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.736763   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.736803   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.736851   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.737097   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.737135   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.737148   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.737157   59107 addons.go:475] Verifying addon metrics-server=true in "embed-certs-239931"
	I0708 21:01:15.739025   59107 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0708 21:01:13.102963   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:15.601580   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:16.101049   58678 pod_ready.go:81] duration metric: took 4m0.00868677s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	E0708 21:01:16.101081   58678 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0708 21:01:16.101094   58678 pod_ready.go:38] duration metric: took 4m5.070908601s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:16.101112   58678 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:16.101147   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:16.101210   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:16.175601   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:16.175631   58678 cri.go:89] found id: ""
	I0708 21:01:16.175642   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:16.175703   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.182938   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:16.183013   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:16.261385   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:16.261411   58678 cri.go:89] found id: ""
	I0708 21:01:16.261423   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:16.261483   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.266231   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:16.266310   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:15.741167   59107 addons.go:510] duration metric: took 2.172185316s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0708 21:01:15.890659   59107 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:15.890702   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.890713   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.890723   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:15.890731   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:15.890738   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:15.890745   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Running
	I0708 21:01:15.890751   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:15.890759   59107 system_pods.go:89] "metrics-server-569cc877fc-f2dkn" [1d3c3e8e-356d-40b9-8add-35eec096e9f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:15.890772   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:15.890790   59107 retry.go:31] will retry after 557.749423ms: missing components: kube-dns
	I0708 21:01:16.457046   59107 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:16.457093   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:16.457105   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:16.457114   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:16.457124   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:16.457131   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:16.457137   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Running
	I0708 21:01:16.457143   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:16.457153   59107 system_pods.go:89] "metrics-server-569cc877fc-f2dkn" [1d3c3e8e-356d-40b9-8add-35eec096e9f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:16.457173   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:16.457183   59107 system_pods.go:126] duration metric: took 2.209089992s to wait for k8s-apps to be running ...
	I0708 21:01:16.457196   59107 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:16.457251   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:16.474652   59107 system_svc.go:56] duration metric: took 17.443712ms WaitForService to wait for kubelet
	I0708 21:01:16.474691   59107 kubeadm.go:576] duration metric: took 2.905677883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:16.474715   59107 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:16.478431   59107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:16.478456   59107 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:16.478480   59107 node_conditions.go:105] duration metric: took 3.758433ms to run NodePressure ...
	I0708 21:01:16.478502   59107 start.go:240] waiting for startup goroutines ...
	I0708 21:01:16.478515   59107 start.go:245] waiting for cluster config update ...
	I0708 21:01:16.478529   59107 start.go:254] writing updated cluster config ...
	I0708 21:01:16.478860   59107 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:16.536046   59107 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:16.538131   59107 out.go:177] * Done! kubectl is now configured to use "embed-certs-239931" cluster and "default" namespace by default
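The 59107 lines above record minikube's readiness gating for "embed-certs-239931": list the kube-system pods, compare them against the required components, and retry after a short delay until everything reports Running ("will retry after ...: missing components: ..."). The following is only a rough sketch of that pattern using client-go; the function names, the kubeconfig path, and the fixed 300ms backoff are assumptions for illustration, not minikube's actual implementation.

	// readiness_sketch.go — illustrative only; mirrors the "missing components" retry loop above.
	package main

	import (
		"context"
		"fmt"
		"strings"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// missingComponents reports which required name prefixes have no Running pod.
	func missingComponents(pods []corev1.Pod, required []string) []string {
		running := map[string]bool{}
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				continue
			}
			for _, prefix := range required {
				if strings.HasPrefix(p.Name, prefix) {
					running[prefix] = true
				}
			}
		}
		var missing []string
		for _, prefix := range required {
			if !running[prefix] {
				missing = append(missing, prefix)
			}
		}
		return missing
	}

	// waitForSystemPods polls kube-system until every required component is Running.
	func waitForSystemPods(ctx context.Context, cs kubernetes.Interface, required []string) error {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err == nil {
				missing := missingComponents(pods.Items, required)
				if len(missing) == 0 {
					return nil
				}
				fmt.Printf("will retry: missing components: %v\n", missing)
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(300 * time.Millisecond): // fixed delay here; the log above shows varying retry intervals
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		required := []string{"coredns", "etcd", "kube-apiserver", "kube-controller-manager", "kube-proxy", "kube-scheduler"}
		if err := waitForSystemPods(ctx, cs, required); err != nil {
			panic(err)
		}
		fmt.Println("all system components Running")
	}
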
	I0708 21:01:12.440116   59655 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 21:01:12.440237   59655 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 21:01:13.441567   59655 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001312349s
	I0708 21:01:13.441690   59655 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 21:01:18.943345   59655 kubeadm.go:309] [api-check] The API server is healthy after 5.501634999s
	I0708 21:01:18.963728   59655 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 21:01:18.980036   59655 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 21:01:19.028362   59655 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 21:01:19.028635   59655 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-071971 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 21:01:19.051700   59655 kubeadm.go:309] [bootstrap-token] Using token: guoi3f.tsy4dvdlokyfqa2b
	I0708 21:01:19.053224   59655 out.go:204]   - Configuring RBAC rules ...
	I0708 21:01:19.053323   59655 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 21:01:19.063058   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 21:01:19.077711   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 21:01:19.090415   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 21:01:19.095539   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 21:01:19.101465   59655 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 21:01:19.351634   59655 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 21:01:19.809053   59655 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 21:01:20.359069   59655 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 21:01:20.359125   59655 kubeadm.go:309] 
	I0708 21:01:20.359193   59655 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 21:01:20.359227   59655 kubeadm.go:309] 
	I0708 21:01:20.359368   59655 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 21:01:20.359379   59655 kubeadm.go:309] 
	I0708 21:01:20.359439   59655 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 21:01:20.359553   59655 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 21:01:20.359613   59655 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 21:01:20.359624   59655 kubeadm.go:309] 
	I0708 21:01:20.359686   59655 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 21:01:20.359694   59655 kubeadm.go:309] 
	I0708 21:01:20.359733   59655 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 21:01:20.359740   59655 kubeadm.go:309] 
	I0708 21:01:20.359787   59655 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 21:01:20.359899   59655 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 21:01:20.359994   59655 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 21:01:20.360003   59655 kubeadm.go:309] 
	I0708 21:01:20.360096   59655 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 21:01:20.360194   59655 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 21:01:20.360202   59655 kubeadm.go:309] 
	I0708 21:01:20.360311   59655 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token guoi3f.tsy4dvdlokyfqa2b \
	I0708 21:01:20.360468   59655 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 21:01:20.360507   59655 kubeadm.go:309] 	--control-plane 
	I0708 21:01:20.360516   59655 kubeadm.go:309] 
	I0708 21:01:20.360628   59655 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 21:01:20.360639   59655 kubeadm.go:309] 
	I0708 21:01:20.360765   59655 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token guoi3f.tsy4dvdlokyfqa2b \
	I0708 21:01:20.360891   59655 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 21:01:20.361857   59655 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 21:01:20.361894   59655 cni.go:84] Creating CNI manager for ""
	I0708 21:01:20.361910   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:01:20.363579   59655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 21:01:16.309299   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:16.309328   58678 cri.go:89] found id: ""
	I0708 21:01:16.309337   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:16.309403   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.314236   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:16.314320   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:16.371891   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:16.371919   58678 cri.go:89] found id: ""
	I0708 21:01:16.371937   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:16.372008   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.380409   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:16.380480   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:16.428411   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:16.428441   58678 cri.go:89] found id: ""
	I0708 21:01:16.428452   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:16.428514   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.433310   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:16.433390   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:16.474785   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:16.474807   58678 cri.go:89] found id: ""
	I0708 21:01:16.474816   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:16.474882   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.480849   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:16.480933   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:16.529115   58678 cri.go:89] found id: ""
	I0708 21:01:16.529136   58678 logs.go:276] 0 containers: []
	W0708 21:01:16.529146   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:16.529153   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:16.529222   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:16.576499   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:16.576519   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:16.576527   58678 cri.go:89] found id: ""
	I0708 21:01:16.576536   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:16.576584   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.581261   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.587704   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:16.587733   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:16.651329   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:16.651385   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:16.706341   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:16.706380   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:17.302518   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:17.302570   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:17.373619   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:17.373651   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:17.414687   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:17.414722   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:17.470462   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:17.470499   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:17.487151   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:17.487189   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:17.625611   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:17.625655   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:17.673291   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:17.673325   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:17.712222   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:17.712253   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:17.752635   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:17.752665   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:17.794056   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:17.794085   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:01:20.341805   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:20.362405   58678 api_server.go:72] duration metric: took 4m15.074761342s to wait for apiserver process to appear ...
	I0708 21:01:20.362430   58678 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:20.362465   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:20.362523   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:20.409947   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:20.409974   58678 cri.go:89] found id: ""
	I0708 21:01:20.409983   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:20.410040   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.414415   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:20.414476   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:20.463162   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:20.463186   58678 cri.go:89] found id: ""
	I0708 21:01:20.463196   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:20.463263   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.468905   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:20.468986   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:20.514265   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:20.514291   58678 cri.go:89] found id: ""
	I0708 21:01:20.514299   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:20.514357   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.519003   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:20.519081   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:20.565097   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:20.565122   58678 cri.go:89] found id: ""
	I0708 21:01:20.565132   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:20.565190   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.569971   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:20.570048   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:20.614435   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:20.614459   58678 cri.go:89] found id: ""
	I0708 21:01:20.614469   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:20.614525   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.619745   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:20.619824   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:20.660213   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:20.660235   58678 cri.go:89] found id: ""
	I0708 21:01:20.660242   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:20.660292   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.664740   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:20.664822   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:20.710279   58678 cri.go:89] found id: ""
	I0708 21:01:20.710300   58678 logs.go:276] 0 containers: []
	W0708 21:01:20.710307   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:20.710312   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:20.710359   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:20.751880   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:20.751906   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:20.751910   58678 cri.go:89] found id: ""
	I0708 21:01:20.751917   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:20.752028   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.756530   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.760679   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:20.760705   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:20.800525   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:20.800556   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:20.845629   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:20.845666   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:20.364837   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 21:01:20.376977   59655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 21:01:20.400133   59655 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 21:01:20.400241   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:20.400291   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-071971 minikube.k8s.io/updated_at=2024_07_08T21_01_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=default-k8s-diff-port-071971 minikube.k8s.io/primary=true
	I0708 21:01:20.597429   59655 ops.go:34] apiserver oom_adj: -16
	I0708 21:01:20.597490   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.098582   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.597812   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:22.097790   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.356988   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:21.357025   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:21.416130   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:21.416160   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:21.431831   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:21.431865   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:21.479568   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:21.479597   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:21.527937   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:21.527970   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:21.569569   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:21.569605   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:21.691646   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:21.691678   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:21.737949   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:21.737975   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:21.789038   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:21.789069   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:21.831677   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:21.831703   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:01:24.380502   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 21:01:24.385139   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I0708 21:01:24.386116   58678 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:24.386137   58678 api_server.go:131] duration metric: took 4.023699983s to wait for apiserver health ...
	I0708 21:01:24.386146   58678 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:24.386171   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:24.386225   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:24.423786   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:24.423809   58678 cri.go:89] found id: ""
	I0708 21:01:24.423816   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:24.423869   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.428385   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:24.428447   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:24.467186   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:24.467206   58678 cri.go:89] found id: ""
	I0708 21:01:24.467213   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:24.467269   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.472208   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:24.472273   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:24.511157   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:24.511188   58678 cri.go:89] found id: ""
	I0708 21:01:24.511199   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:24.511266   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.516077   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:24.516144   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:24.556095   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:24.556115   58678 cri.go:89] found id: ""
	I0708 21:01:24.556122   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:24.556171   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.560735   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:24.560795   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:24.602473   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:24.602498   58678 cri.go:89] found id: ""
	I0708 21:01:24.602508   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:24.602562   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.608926   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:24.609003   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:24.653230   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:24.653258   58678 cri.go:89] found id: ""
	I0708 21:01:24.653267   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:24.653327   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.657884   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:24.657954   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:24.700775   58678 cri.go:89] found id: ""
	I0708 21:01:24.700800   58678 logs.go:276] 0 containers: []
	W0708 21:01:24.700810   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:24.700817   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:24.700876   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:24.738593   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:24.738619   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:24.738625   58678 cri.go:89] found id: ""
	I0708 21:01:24.738633   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:24.738689   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.743324   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.747684   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:24.747709   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:24.800431   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:24.800467   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:24.910702   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:24.910738   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:24.967323   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:24.967355   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:25.012335   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:25.012367   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:25.393024   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:25.393064   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:01:25.449280   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:25.449315   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:25.488676   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:25.488703   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:25.503705   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:25.503734   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:25.551111   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:25.551155   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:25.598388   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:25.598425   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:25.642052   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:25.642087   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:25.680632   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:25.680665   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
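The 58678 process above gathers diagnostics by shelling out over SSH: journalctl for the kubelet and CRI-O units, and "crictl logs --tail 400 <container-id>" for each control-plane container. A minimal Go sketch of the per-container step, assuming only that crictl is installed on the host and that a real container ID is substituted for the hypothetical placeholder, could look like this:

// Sketch (not minikube source): tail the logs of one container via crictl,
// mirroring the "Gathering logs for ..." steps recorded above.
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs runs `crictl logs --tail <n> <id>` and returns its combined output.
func tailContainerLogs(containerID string, lines int) (string, error) {
	cmd := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(lines), containerID)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	// Hypothetical ID; substitute one listed by `crictl ps -a`.
	out, err := tailContainerLogs("2e901eb02d63", 400)
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(out)
}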
	I0708 21:01:22.597628   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:23.098128   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:23.597756   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:24.097555   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:24.598149   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:25.098149   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:25.598255   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:26.097514   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:26.598211   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:27.097610   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.229251   58678 system_pods.go:59] 8 kube-system pods found
	I0708 21:01:28.229286   58678 system_pods.go:61] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running
	I0708 21:01:28.229293   58678 system_pods.go:61] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running
	I0708 21:01:28.229298   58678 system_pods.go:61] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running
	I0708 21:01:28.229304   58678 system_pods.go:61] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running
	I0708 21:01:28.229308   58678 system_pods.go:61] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 21:01:28.229312   58678 system_pods.go:61] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running
	I0708 21:01:28.229321   58678 system_pods.go:61] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:28.229327   58678 system_pods.go:61] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 21:01:28.229337   58678 system_pods.go:74] duration metric: took 3.843183956s to wait for pod list to return data ...
	I0708 21:01:28.229347   58678 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:28.232297   58678 default_sa.go:45] found service account: "default"
	I0708 21:01:28.232323   58678 default_sa.go:55] duration metric: took 2.96709ms for default service account to be created ...
	I0708 21:01:28.232333   58678 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:28.240720   58678 system_pods.go:86] 8 kube-system pods found
	I0708 21:01:28.240750   58678 system_pods.go:89] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running
	I0708 21:01:28.240755   58678 system_pods.go:89] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running
	I0708 21:01:28.240760   58678 system_pods.go:89] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running
	I0708 21:01:28.240765   58678 system_pods.go:89] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running
	I0708 21:01:28.240770   58678 system_pods.go:89] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 21:01:28.240774   58678 system_pods.go:89] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running
	I0708 21:01:28.240781   58678 system_pods.go:89] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:28.240787   58678 system_pods.go:89] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 21:01:28.240794   58678 system_pods.go:126] duration metric: took 8.454141ms to wait for k8s-apps to be running ...
	I0708 21:01:28.240804   58678 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:28.240855   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:28.256600   58678 system_svc.go:56] duration metric: took 15.789082ms WaitForService to wait for kubelet
	I0708 21:01:28.256630   58678 kubeadm.go:576] duration metric: took 4m22.968988646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:28.256654   58678 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:28.260384   58678 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:28.260402   58678 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:28.260412   58678 node_conditions.go:105] duration metric: took 3.753004ms to run NodePressure ...
	I0708 21:01:28.260422   58678 start.go:240] waiting for startup goroutines ...
	I0708 21:01:28.260429   58678 start.go:245] waiting for cluster config update ...
	I0708 21:01:28.260438   58678 start.go:254] writing updated cluster config ...
	I0708 21:01:28.260686   58678 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:28.311517   58678 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:28.313560   58678 out.go:177] * Done! kubectl is now configured to use "no-preload-028021" cluster and "default" namespace by default
	I0708 21:01:27.598457   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.098475   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.598380   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:29.097496   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:29.598229   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:30.097844   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:30.598323   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:31.097781   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:31.598085   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:32.098438   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:32.598450   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.098414   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.597823   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.688717   59655 kubeadm.go:1107] duration metric: took 13.288534329s to wait for elevateKubeSystemPrivileges
	W0708 21:01:33.688756   59655 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 21:01:33.688765   59655 kubeadm.go:393] duration metric: took 5m12.976251287s to StartCluster
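The repeated "kubectl get sa default" calls above, issued roughly every 500 ms by the 59655 process, are a poll for the default service account before StartCluster can complete. A minimal sketch of such a bounded poll, assuming kubectl is on the PATH (the log actually runs the versioned binary under /var/lib/minikube/binaries over SSH) and using an illustrative 5-minute timeout:

// Sketch: wait for the "default" service account to exist, polling at the
// same ~500 ms cadence seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not found within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}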
	I0708 21:01:33.688782   59655 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:33.688874   59655 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:01:33.690446   59655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:33.690691   59655 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:01:33.690814   59655 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 21:01:33.690875   59655 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-071971"
	I0708 21:01:33.690893   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:01:33.690907   59655 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-071971"
	I0708 21:01:33.690902   59655 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-071971"
	W0708 21:01:33.690915   59655 addons.go:243] addon storage-provisioner should already be in state true
	I0708 21:01:33.690914   59655 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-071971"
	I0708 21:01:33.690939   59655 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-071971"
	I0708 21:01:33.690945   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.690957   59655 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-071971"
	W0708 21:01:33.690968   59655 addons.go:243] addon metrics-server should already be in state true
	I0708 21:01:33.691002   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.691272   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691274   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691294   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.691299   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.691323   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691361   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.692506   59655 out.go:177] * Verifying Kubernetes components...
	I0708 21:01:33.694134   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:01:33.708343   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0708 21:01:33.708681   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0708 21:01:33.708849   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.709011   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.709402   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.709421   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.709559   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.709578   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.709795   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.709864   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.710365   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.710411   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.710417   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.710445   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.710809   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
	I0708 21:01:33.711278   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.711858   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.711892   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.712294   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.712604   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.716565   59655 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-071971"
	W0708 21:01:33.716590   59655 addons.go:243] addon default-storageclass should already be in state true
	I0708 21:01:33.716620   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.716990   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.717041   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.728113   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0708 21:01:33.728257   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0708 21:01:33.728694   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.728742   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.729182   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.729211   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.729331   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.729353   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.729605   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.729663   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.729781   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.729846   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.731832   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.731878   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.734021   59655 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:01:33.734026   59655 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 21:01:33.736062   59655 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:33.736094   59655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 21:01:33.736122   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.736174   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 21:01:33.736192   59655 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 21:01:33.736222   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.736793   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0708 21:01:33.737419   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.739820   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.739837   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.740075   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740272   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.740463   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.740484   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740512   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740818   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.740967   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.741060   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.741213   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.741225   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.741279   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.741309   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.741438   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.741596   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.741587   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.741730   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.741820   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.758223   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0708 21:01:33.758739   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.759237   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.759254   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.759633   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.759909   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.761455   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.761644   59655 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:33.761656   59655 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 21:01:33.761669   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.764245   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.764541   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.764563   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.764701   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.764872   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.765022   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.765126   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.926862   59655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:01:33.980155   59655 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-071971" to be "Ready" ...
	I0708 21:01:33.993505   59655 node_ready.go:49] node "default-k8s-diff-port-071971" has status "Ready":"True"
	I0708 21:01:33.993526   59655 node_ready.go:38] duration metric: took 13.344616ms for node "default-k8s-diff-port-071971" to be "Ready" ...
	I0708 21:01:33.993534   59655 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:34.001402   59655 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:34.045900   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:34.058039   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 21:01:34.058059   59655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 21:01:34.102931   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:34.121513   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 21:01:34.121541   59655 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 21:01:34.190181   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:34.190208   59655 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 21:01:34.232200   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
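The addon step above amounts to "kubectl apply -f" over the generated manifests with KUBECONFIG pointed at the cluster. A sketch of that invocation, reusing the paths shown in the log and assuming it runs inside the guest where those files exist:

// Sketch: apply several addon manifests in one kubectl invocation,
// as in the metrics-server apply recorded above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifests(kubeconfig string, files ...string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifests("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}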
	I0708 21:01:35.071867   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.071888   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.071977   59655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.026035336s)
	I0708 21:01:35.072026   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.072044   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.072157   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.072192   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.072205   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.072212   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.073887   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.073887   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.073917   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.073989   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.074003   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.074013   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.073907   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.074111   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.074438   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.074461   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.146813   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.146840   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.147181   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.147201   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.337952   59655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.105709862s)
	I0708 21:01:35.338010   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.338023   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.338415   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.338447   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.338461   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.338471   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.338484   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.338733   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.338751   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.338763   59655 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-071971"
	I0708 21:01:35.340678   59655 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0708 21:01:35.341902   59655 addons.go:510] duration metric: took 1.651084154s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0708 21:01:36.011439   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:37.008538   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.008567   59655 pod_ready.go:81] duration metric: took 3.0071384s for pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.008582   59655 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.013291   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.013313   59655 pod_ready.go:81] duration metric: took 4.723566ms for pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.013326   59655 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.017974   59655 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.017997   59655 pod_ready.go:81] duration metric: took 4.66297ms for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.018009   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.022526   59655 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.022550   59655 pod_ready.go:81] duration metric: took 4.533312ms for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.022563   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.027009   59655 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.027032   59655 pod_ready.go:81] duration metric: took 4.462202ms for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.027042   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l2mdd" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.406030   59655 pod_ready.go:92] pod "kube-proxy-l2mdd" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.406055   59655 pod_ready.go:81] duration metric: took 379.00677ms for pod "kube-proxy-l2mdd" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.406064   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.806120   59655 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.806141   59655 pod_ready.go:81] duration metric: took 400.070718ms for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.806151   59655 pod_ready.go:38] duration metric: took 3.812606006s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:37.806165   59655 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:37.806214   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:37.822846   59655 api_server.go:72] duration metric: took 4.132126389s to wait for apiserver process to appear ...
	I0708 21:01:37.822872   59655 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:37.822889   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 21:01:37.827017   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 200:
	ok
	I0708 21:01:37.827906   59655 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:37.827930   59655 api_server.go:131] duration metric: took 5.051704ms to wait for apiserver health ...
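The health check above is a plain HTTPS GET against the apiserver's /healthz endpoint, treating a 200 response with body "ok" as healthy. A self-contained sketch of that probe, with the caveat that the real check trusts the cluster CA while this one skips TLS verification to stay short:

// Sketch: probe the apiserver /healthz endpoint, as in the check logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("https://192.168.72.163:8444/healthz")
	fmt.Println("healthy:", healthy, "err:", err)
}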
	I0708 21:01:37.827938   59655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:38.010909   59655 system_pods.go:59] 9 kube-system pods found
	I0708 21:01:38.010937   59655 system_pods.go:61] "coredns-7db6d8ff4d-8msvk" [38c1e0eb-5eb4-4acb-a5ae-c72871884e3d] Running
	I0708 21:01:38.010942   59655 system_pods.go:61] "coredns-7db6d8ff4d-hq7zj" [ddb0f99d-a91d-4bb7-96e7-695b6101a601] Running
	I0708 21:01:38.010946   59655 system_pods.go:61] "etcd-default-k8s-diff-port-071971" [e3399214-404c-423e-9648-b4d920028a92] Running
	I0708 21:01:38.010949   59655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071971" [7b726b49-c243-4126-b6d2-fc12abc9a042] Running
	I0708 21:01:38.010953   59655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071971" [6a731125-daa4-4da1-b9e0-1206da592fde] Running
	I0708 21:01:38.010956   59655 system_pods.go:61] "kube-proxy-l2mdd" [b1d70ae2-ed86-49bd-8910-a12c5cd8091a] Running
	I0708 21:01:38.010959   59655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071971" [dc238033-038e-49ec-ba48-392b0ec2f7bd] Running
	I0708 21:01:38.010965   59655 system_pods.go:61] "metrics-server-569cc877fc-k8vhl" [09f957f3-d76f-4f21-b9a6-e5b249d07e1e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:38.010970   59655 system_pods.go:61] "storage-provisioner" [805a8fdb-ed9e-4f80-a2c9-7d8a0155b228] Running
	I0708 21:01:38.010979   59655 system_pods.go:74] duration metric: took 183.034922ms to wait for pod list to return data ...
	I0708 21:01:38.010987   59655 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:38.205307   59655 default_sa.go:45] found service account: "default"
	I0708 21:01:38.205331   59655 default_sa.go:55] duration metric: took 194.338319ms for default service account to be created ...
	I0708 21:01:38.205340   59655 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:38.410958   59655 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:38.410988   59655 system_pods.go:89] "coredns-7db6d8ff4d-8msvk" [38c1e0eb-5eb4-4acb-a5ae-c72871884e3d] Running
	I0708 21:01:38.410995   59655 system_pods.go:89] "coredns-7db6d8ff4d-hq7zj" [ddb0f99d-a91d-4bb7-96e7-695b6101a601] Running
	I0708 21:01:38.411000   59655 system_pods.go:89] "etcd-default-k8s-diff-port-071971" [e3399214-404c-423e-9648-b4d920028a92] Running
	I0708 21:01:38.411005   59655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071971" [7b726b49-c243-4126-b6d2-fc12abc9a042] Running
	I0708 21:01:38.411009   59655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071971" [6a731125-daa4-4da1-b9e0-1206da592fde] Running
	I0708 21:01:38.411013   59655 system_pods.go:89] "kube-proxy-l2mdd" [b1d70ae2-ed86-49bd-8910-a12c5cd8091a] Running
	I0708 21:01:38.411017   59655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071971" [dc238033-038e-49ec-ba48-392b0ec2f7bd] Running
	I0708 21:01:38.411024   59655 system_pods.go:89] "metrics-server-569cc877fc-k8vhl" [09f957f3-d76f-4f21-b9a6-e5b249d07e1e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:38.411029   59655 system_pods.go:89] "storage-provisioner" [805a8fdb-ed9e-4f80-a2c9-7d8a0155b228] Running
	I0708 21:01:38.411040   59655 system_pods.go:126] duration metric: took 205.695019ms to wait for k8s-apps to be running ...
	I0708 21:01:38.411050   59655 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:38.411092   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:38.428218   59655 system_svc.go:56] duration metric: took 17.158999ms WaitForService to wait for kubelet
	I0708 21:01:38.428248   59655 kubeadm.go:576] duration metric: took 4.737530934s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:38.428270   59655 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:38.606369   59655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:38.606394   59655 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:38.606404   59655 node_conditions.go:105] duration metric: took 178.130401ms to run NodePressure ...
	I0708 21:01:38.606415   59655 start.go:240] waiting for startup goroutines ...
	I0708 21:01:38.606423   59655 start.go:245] waiting for cluster config update ...
	I0708 21:01:38.606432   59655 start.go:254] writing updated cluster config ...
	I0708 21:01:38.606686   59655 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:38.657280   59655 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:38.659556   59655 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-071971" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.046728307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473018046704568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06185ada-9fba-45de-adda-5f953609e6aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.047530023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ba22378-dd4b-4c19-bb17-e54538c65749 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.047579686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ba22378-dd4b-4c19-bb17-e54538c65749 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.047831693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce0f4fb108aad8b7e4d5f290e6c38ba959eaff10eb996db4ead860b3da656ffe,PodSandboxId:ffe9c0f59fe34ac7cb5f8a5eba4ecf639cc36b1ef8f9e207e5cfadefae60ca76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472476002732002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe38aa1-fac7-4517-9b33-76f04d2a2f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 56f73b55,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9908f4f99d81652e5638627904e4a861913b81b85f94b5530d7b3eb98fc2c22d,PodSandboxId:a5db9a7e39014ba86f9ff76f744cafba01f3b73c4d3ecc827a95ebe36cd3339d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475436591795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e42c3f-d8a8-4907-b08d-ada6919b55c9,},Annotations:map[string]string{io.kubernetes.container.hash: dc8d0052,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147522b6da453cc658fcf803ab092f1f01ec6299c39beb49ed8aea8fb39183f2,PodSandboxId:2eba620a756036dea40572b4991f9d2e2fecc452569c6a7411509043777e0cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475313095981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l9xmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2723e6e-5bce-43ed-abdb-63120212456f,},Annotations:map[string]string{io.kubernetes.container.hash: d9faa6cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97ff111abfe87fd7f3ae2693205979802fb796c7a252ac101182b0b9045d31f,PodSandboxId:2783ac8e694caf272447f415c358283082e3dcc84c1b1f96c7ab834304944aab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1720472474690386153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkvf6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f5061c-fd24-42eb-97b4-e5ec5f57c325,},Annotations:map[string]string{io.kubernetes.container.hash: 85f22e48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd8b4dd934547918e6dd0265b5ab59c0c042fe802122b6dde6fb56c7525b3086,PodSandboxId:d6cdd9e57c5921ad5bdedfa19b2b18a1d993896cde4cf367957b7b0d90367a51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472454513704806,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5915f06682f25360235a0571bf07fcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1751db812059a2b25558db47e64e54db874fc689eaf21c9b94155e5cc6b8ee,PodSandboxId:a0bcc0d0f828fc731627af5ccec3acfbfea977382862bd79796061b5ee3f381e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472454505998034,Labels:map[string]str
ing{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017464a8eb9372d81943b1e895114a89,},Annotations:map[string]string{io.kubernetes.container.hash: 9bc29772,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3f064e707e3b8a1df2cecb502630c714a064fa2de639369fd830edb62267c4,PodSandboxId:ac1107c2f5394188a8e9f5bd7236c7285780827027e893ea96e1638362fed98f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472454487646172,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182433845698355cb350e0fe26b6032e,},Annotations:map[string]string{io.kubernetes.container.hash: a3816144,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99f8e5897ef06e4cad24cdd6d8f7c18a5b9d5637d7c6312b2816614ae7acb3d,PodSandboxId:cfd2e404c415fef9271b92134f9e0cb1919030310264b87584e9ffd5d9258330,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472454510272686,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a52823041510db1c9cec0ed257a7c73,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ba22378-dd4b-4c19-bb17-e54538c65749 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.086900806Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=de2e890b-1266-4b85-9dee-64963338e236 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.086976772Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=de2e890b-1266-4b85-9dee-64963338e236 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.088358245Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b70ea635-2d5d-4c22-b23c-165b6038a90e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.089259148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473018089232389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b70ea635-2d5d-4c22-b23c-165b6038a90e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.089824849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca70630a-b44a-41cf-9f9b-070d20f03cef name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.089890806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca70630a-b44a-41cf-9f9b-070d20f03cef name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.090132452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce0f4fb108aad8b7e4d5f290e6c38ba959eaff10eb996db4ead860b3da656ffe,PodSandboxId:ffe9c0f59fe34ac7cb5f8a5eba4ecf639cc36b1ef8f9e207e5cfadefae60ca76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472476002732002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe38aa1-fac7-4517-9b33-76f04d2a2f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 56f73b55,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9908f4f99d81652e5638627904e4a861913b81b85f94b5530d7b3eb98fc2c22d,PodSandboxId:a5db9a7e39014ba86f9ff76f744cafba01f3b73c4d3ecc827a95ebe36cd3339d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475436591795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e42c3f-d8a8-4907-b08d-ada6919b55c9,},Annotations:map[string]string{io.kubernetes.container.hash: dc8d0052,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147522b6da453cc658fcf803ab092f1f01ec6299c39beb49ed8aea8fb39183f2,PodSandboxId:2eba620a756036dea40572b4991f9d2e2fecc452569c6a7411509043777e0cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475313095981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l9xmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2723e6e-5bce-43ed-abdb-63120212456f,},Annotations:map[string]string{io.kubernetes.container.hash: d9faa6cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97ff111abfe87fd7f3ae2693205979802fb796c7a252ac101182b0b9045d31f,PodSandboxId:2783ac8e694caf272447f415c358283082e3dcc84c1b1f96c7ab834304944aab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1720472474690386153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkvf6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f5061c-fd24-42eb-97b4-e5ec5f57c325,},Annotations:map[string]string{io.kubernetes.container.hash: 85f22e48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd8b4dd934547918e6dd0265b5ab59c0c042fe802122b6dde6fb56c7525b3086,PodSandboxId:d6cdd9e57c5921ad5bdedfa19b2b18a1d993896cde4cf367957b7b0d90367a51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472454513704806,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5915f06682f25360235a0571bf07fcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1751db812059a2b25558db47e64e54db874fc689eaf21c9b94155e5cc6b8ee,PodSandboxId:a0bcc0d0f828fc731627af5ccec3acfbfea977382862bd79796061b5ee3f381e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472454505998034,Labels:map[string]str
ing{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017464a8eb9372d81943b1e895114a89,},Annotations:map[string]string{io.kubernetes.container.hash: 9bc29772,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3f064e707e3b8a1df2cecb502630c714a064fa2de639369fd830edb62267c4,PodSandboxId:ac1107c2f5394188a8e9f5bd7236c7285780827027e893ea96e1638362fed98f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472454487646172,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182433845698355cb350e0fe26b6032e,},Annotations:map[string]string{io.kubernetes.container.hash: a3816144,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99f8e5897ef06e4cad24cdd6d8f7c18a5b9d5637d7c6312b2816614ae7acb3d,PodSandboxId:cfd2e404c415fef9271b92134f9e0cb1919030310264b87584e9ffd5d9258330,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472454510272686,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a52823041510db1c9cec0ed257a7c73,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca70630a-b44a-41cf-9f9b-070d20f03cef name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.130506522Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e1a2975-ce4f-412c-8bd6-883f0322620b name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.130938901Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e1a2975-ce4f-412c-8bd6-883f0322620b name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.132947052Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0ffcb81-220e-41ee-90aa-7fceb907b4f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.133466323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473018133440580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0ffcb81-220e-41ee-90aa-7fceb907b4f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.134335643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6252a05-4e44-49b3-ac08-9ada1988355a name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.134447029Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6252a05-4e44-49b3-ac08-9ada1988355a name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.135079452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce0f4fb108aad8b7e4d5f290e6c38ba959eaff10eb996db4ead860b3da656ffe,PodSandboxId:ffe9c0f59fe34ac7cb5f8a5eba4ecf639cc36b1ef8f9e207e5cfadefae60ca76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472476002732002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe38aa1-fac7-4517-9b33-76f04d2a2f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 56f73b55,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9908f4f99d81652e5638627904e4a861913b81b85f94b5530d7b3eb98fc2c22d,PodSandboxId:a5db9a7e39014ba86f9ff76f744cafba01f3b73c4d3ecc827a95ebe36cd3339d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475436591795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e42c3f-d8a8-4907-b08d-ada6919b55c9,},Annotations:map[string]string{io.kubernetes.container.hash: dc8d0052,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147522b6da453cc658fcf803ab092f1f01ec6299c39beb49ed8aea8fb39183f2,PodSandboxId:2eba620a756036dea40572b4991f9d2e2fecc452569c6a7411509043777e0cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475313095981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l9xmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2723e6e-5bce-43ed-abdb-63120212456f,},Annotations:map[string]string{io.kubernetes.container.hash: d9faa6cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97ff111abfe87fd7f3ae2693205979802fb796c7a252ac101182b0b9045d31f,PodSandboxId:2783ac8e694caf272447f415c358283082e3dcc84c1b1f96c7ab834304944aab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1720472474690386153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkvf6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f5061c-fd24-42eb-97b4-e5ec5f57c325,},Annotations:map[string]string{io.kubernetes.container.hash: 85f22e48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd8b4dd934547918e6dd0265b5ab59c0c042fe802122b6dde6fb56c7525b3086,PodSandboxId:d6cdd9e57c5921ad5bdedfa19b2b18a1d993896cde4cf367957b7b0d90367a51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472454513704806,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5915f06682f25360235a0571bf07fcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1751db812059a2b25558db47e64e54db874fc689eaf21c9b94155e5cc6b8ee,PodSandboxId:a0bcc0d0f828fc731627af5ccec3acfbfea977382862bd79796061b5ee3f381e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472454505998034,Labels:map[string]str
ing{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017464a8eb9372d81943b1e895114a89,},Annotations:map[string]string{io.kubernetes.container.hash: 9bc29772,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3f064e707e3b8a1df2cecb502630c714a064fa2de639369fd830edb62267c4,PodSandboxId:ac1107c2f5394188a8e9f5bd7236c7285780827027e893ea96e1638362fed98f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472454487646172,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182433845698355cb350e0fe26b6032e,},Annotations:map[string]string{io.kubernetes.container.hash: a3816144,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99f8e5897ef06e4cad24cdd6d8f7c18a5b9d5637d7c6312b2816614ae7acb3d,PodSandboxId:cfd2e404c415fef9271b92134f9e0cb1919030310264b87584e9ffd5d9258330,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472454510272686,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a52823041510db1c9cec0ed257a7c73,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6252a05-4e44-49b3-ac08-9ada1988355a name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.177687631Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63692fda-f0e7-4354-86c7-30045f939b00 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.177957782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63692fda-f0e7-4354-86c7-30045f939b00 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.179152017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b00603da-c087-4472-aa4b-ada6eecd3014 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.179587819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473018179565894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b00603da-c087-4472-aa4b-ada6eecd3014 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.180295181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=716873ba-ff2d-474c-a4cc-5dabbe1d24fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.180345791Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=716873ba-ff2d-474c-a4cc-5dabbe1d24fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:18 embed-certs-239931 crio[726]: time="2024-07-08 21:10:18.180533118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce0f4fb108aad8b7e4d5f290e6c38ba959eaff10eb996db4ead860b3da656ffe,PodSandboxId:ffe9c0f59fe34ac7cb5f8a5eba4ecf639cc36b1ef8f9e207e5cfadefae60ca76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472476002732002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe38aa1-fac7-4517-9b33-76f04d2a2f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 56f73b55,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9908f4f99d81652e5638627904e4a861913b81b85f94b5530d7b3eb98fc2c22d,PodSandboxId:a5db9a7e39014ba86f9ff76f744cafba01f3b73c4d3ecc827a95ebe36cd3339d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475436591795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e42c3f-d8a8-4907-b08d-ada6919b55c9,},Annotations:map[string]string{io.kubernetes.container.hash: dc8d0052,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147522b6da453cc658fcf803ab092f1f01ec6299c39beb49ed8aea8fb39183f2,PodSandboxId:2eba620a756036dea40572b4991f9d2e2fecc452569c6a7411509043777e0cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475313095981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l9xmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2723e6e-5bce-43ed-abdb-63120212456f,},Annotations:map[string]string{io.kubernetes.container.hash: d9faa6cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97ff111abfe87fd7f3ae2693205979802fb796c7a252ac101182b0b9045d31f,PodSandboxId:2783ac8e694caf272447f415c358283082e3dcc84c1b1f96c7ab834304944aab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1720472474690386153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkvf6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f5061c-fd24-42eb-97b4-e5ec5f57c325,},Annotations:map[string]string{io.kubernetes.container.hash: 85f22e48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd8b4dd934547918e6dd0265b5ab59c0c042fe802122b6dde6fb56c7525b3086,PodSandboxId:d6cdd9e57c5921ad5bdedfa19b2b18a1d993896cde4cf367957b7b0d90367a51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472454513704806,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5915f06682f25360235a0571bf07fcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1751db812059a2b25558db47e64e54db874fc689eaf21c9b94155e5cc6b8ee,PodSandboxId:a0bcc0d0f828fc731627af5ccec3acfbfea977382862bd79796061b5ee3f381e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472454505998034,Labels:map[string]str
ing{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017464a8eb9372d81943b1e895114a89,},Annotations:map[string]string{io.kubernetes.container.hash: 9bc29772,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3f064e707e3b8a1df2cecb502630c714a064fa2de639369fd830edb62267c4,PodSandboxId:ac1107c2f5394188a8e9f5bd7236c7285780827027e893ea96e1638362fed98f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472454487646172,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182433845698355cb350e0fe26b6032e,},Annotations:map[string]string{io.kubernetes.container.hash: a3816144,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99f8e5897ef06e4cad24cdd6d8f7c18a5b9d5637d7c6312b2816614ae7acb3d,PodSandboxId:cfd2e404c415fef9271b92134f9e0cb1919030310264b87584e9ffd5d9258330,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472454510272686,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a52823041510db1c9cec0ed257a7c73,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=716873ba-ff2d-474c-a4cc-5dabbe1d24fa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ce0f4fb108aad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   ffe9c0f59fe34       storage-provisioner
	9908f4f99d816       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   a5db9a7e39014       coredns-7db6d8ff4d-qbqkx
	147522b6da453       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   2eba620a75603       coredns-7db6d8ff4d-l9xmm
	c97ff111abfe8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   9 minutes ago       Running             kube-proxy                0                   2783ac8e694ca       kube-proxy-vkvf6
	cd8b4dd934547       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   9 minutes ago       Running             kube-scheduler            2                   d6cdd9e57c592       kube-scheduler-embed-certs-239931
	d99f8e5897ef0       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   9 minutes ago       Running             kube-controller-manager   2                   cfd2e404c415f       kube-controller-manager-embed-certs-239931
	5c1751db81205       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   9 minutes ago       Running             kube-apiserver            2                   a0bcc0d0f828f       kube-apiserver-embed-certs-239931
	1b3f064e707e3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   ac1107c2f5394       etcd-embed-certs-239931
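
	The table above is the crio view of the node; the same listing (and per-container logs) can be pulled straight from the runtime over the CRI socket. A minimal sketch, assuming the minikube binary is on PATH and using the embed-certs-239931 profile named in the logs:
	    minikube -p embed-certs-239931 ssh "sudo crictl ps -a"
	    minikube -p embed-certs-239931 ssh "sudo crictl logs 147522b6da453"   # coredns container ID from the table above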
	
	
	==> coredns [147522b6da453cc658fcf803ab092f1f01ec6299c39beb49ed8aea8fb39183f2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9908f4f99d81652e5638627904e4a861913b81b85f94b5530d7b3eb98fc2c22d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-239931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-239931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=embed-certs-239931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T21_01_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 21:00:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-239931
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 21:10:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 21:06:26 +0000   Mon, 08 Jul 2024 21:00:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 21:06:26 +0000   Mon, 08 Jul 2024 21:00:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 21:06:26 +0000   Mon, 08 Jul 2024 21:00:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 21:06:26 +0000   Mon, 08 Jul 2024 21:00:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.126
	  Hostname:    embed-certs-239931
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b0035653c9b0423ebfd272e326ad42bb
	  System UUID:                b0035653-c9b0-423e-bfd2-72e326ad42bb
	  Boot ID:                    1bcf4981-2530-463c-acb0-0ffab41f1d26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-l9xmm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m4s
	  kube-system                 coredns-7db6d8ff4d-qbqkx                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m4s
	  kube-system                 etcd-embed-certs-239931                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-239931             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-239931    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-vkvf6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-embed-certs-239931             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-569cc877fc-f2dkn               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m3s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m2s   kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node embed-certs-239931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node embed-certs-239931 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node embed-certs-239931 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m5s   node-controller  Node embed-certs-239931 event: Registered Node embed-certs-239931 in Controller
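
	Everything in this "describe nodes" block, including the request/limit percentages, can be re-checked against the live cluster. A sketch, with the kubectl context name embed-certs-239931 assumed from the profile name:
	    kubectl --context embed-certs-239931 describe node embed-certs-239931
	    kubectl --context embed-certs-239931 top node   # needs the metrics API, which never comes up in this run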
	
	
	==> dmesg <==
	[  +0.051333] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041301] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.548936] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.249132] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.613290] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.871224] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.063897] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060084] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.201411] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.141133] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.286859] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.511273] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.065048] kauditd_printk_skb: 130 callbacks suppressed
	[Jul 8 20:56] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +5.593021] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.026092] kauditd_printk_skb: 84 callbacks suppressed
	[Jul 8 21:00] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.762065] systemd-fstab-generator[3558]: Ignoring "noauto" option for root device
	[  +4.732631] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.351170] systemd-fstab-generator[3885]: Ignoring "noauto" option for root device
	[Jul 8 21:01] systemd-fstab-generator[4086]: Ignoring "noauto" option for root device
	[  +0.109028] kauditd_printk_skb: 14 callbacks suppressed
	[Jul 8 21:02] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [1b3f064e707e3b8a1df2cecb502630c714a064fa2de639369fd830edb62267c4] <==
	{"level":"info","ts":"2024-07-08T21:00:54.931421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 switched to configuration voters=(2618468096595348661)"}
	{"level":"info","ts":"2024-07-08T21:00:54.933077Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c6330389cea17d04","local-member-id":"2456aadc51424cb5","added-peer-id":"2456aadc51424cb5","added-peer-peer-urls":["https://192.168.61.126:2380"]}
	{"level":"info","ts":"2024-07-08T21:00:54.94394Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T21:00:54.944001Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.126:2380"}
	{"level":"info","ts":"2024-07-08T21:00:54.944182Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.126:2380"}
	{"level":"info","ts":"2024-07-08T21:00:54.954796Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T21:00:54.954712Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2456aadc51424cb5","initial-advertise-peer-urls":["https://192.168.61.126:2380"],"listen-peer-urls":["https://192.168.61.126:2380"],"advertise-client-urls":["https://192.168.61.126:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.126:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T21:00:55.740844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-08T21:00:55.740895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-08T21:00:55.740936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 received MsgPreVoteResp from 2456aadc51424cb5 at term 1"}
	{"level":"info","ts":"2024-07-08T21:00:55.74095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 became candidate at term 2"}
	{"level":"info","ts":"2024-07-08T21:00:55.740955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 received MsgVoteResp from 2456aadc51424cb5 at term 2"}
	{"level":"info","ts":"2024-07-08T21:00:55.740978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 became leader at term 2"}
	{"level":"info","ts":"2024-07-08T21:00:55.741002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2456aadc51424cb5 elected leader 2456aadc51424cb5 at term 2"}
	{"level":"info","ts":"2024-07-08T21:00:55.745018Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2456aadc51424cb5","local-member-attributes":"{Name:embed-certs-239931 ClientURLs:[https://192.168.61.126:2379]}","request-path":"/0/members/2456aadc51424cb5/attributes","cluster-id":"c6330389cea17d04","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T21:00:55.745072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T21:00:55.745409Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:00:55.745798Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T21:00:55.75161Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.126:2379"}
	{"level":"info","ts":"2024-07-08T21:00:55.751719Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c6330389cea17d04","local-member-id":"2456aadc51424cb5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:00:55.751845Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:00:55.751882Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:00:55.755639Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T21:00:55.763802Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T21:00:55.763848Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:10:18 up 14 min,  0 users,  load average: 0.00, 0.09, 0.11
	Linux embed-certs-239931 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5c1751db812059a2b25558db47e64e54db874fc689eaf21c9b94155e5cc6b8ee] <==
	I0708 21:04:16.252085       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:05:57.476275       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:05:57.476556       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0708 21:05:58.477679       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:05:58.477793       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:05:58.477807       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:05:58.477920       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:05:58.478011       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:05:58.478994       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:06:58.478016       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:06:58.478105       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:06:58.478115       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:06:58.479357       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:06:58.479427       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:06:58.479469       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:08:58.478802       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:08:58.478898       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:08:58.478908       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:08:58.479989       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:08:58.480183       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:08:58.480241       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
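
	The repeated 503s above all come from the aggregated metrics API: v1beta1.metrics.k8s.io is registered, but its backing metrics-server pod never becomes ready (its image pull is failing, per the kubelet log below), so the apiserver cannot download its OpenAPI spec. A quick way to confirm, assuming a kubectl context named embed-certs-239931:
	    kubectl --context embed-certs-239931 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-239931 -n kube-system get pod metrics-server-569cc877fc-f2dkn -o wide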
	
	
	==> kube-controller-manager [d99f8e5897ef06e4cad24cdd6d8f7c18a5b9d5637d7c6312b2816614ae7acb3d] <==
	I0708 21:04:43.841679       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:05:13.284834       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:05:13.853822       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:05:43.290386       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:05:43.862533       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:06:13.296185       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:06:13.874020       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:06:43.301529       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:06:43.882201       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:07:13.308385       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:07:13.894240       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0708 21:07:14.539881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="283.502µs"
	I0708 21:07:26.536615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="160.119µs"
	E0708 21:07:43.313519       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:07:43.902948       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:08:13.319471       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:08:13.914971       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:08:43.324702       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:08:43.923316       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:09:13.330416       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:09:13.932300       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:09:43.335216       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:09:43.940567       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:10:13.340793       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:10:13.949218       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c97ff111abfe87fd7f3ae2693205979802fb796c7a252ac101182b0b9045d31f] <==
	I0708 21:01:16.006453       1 server_linux.go:69] "Using iptables proxy"
	I0708 21:01:16.032946       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.126"]
	I0708 21:01:16.212106       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 21:01:16.212331       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 21:01:16.212694       1 server_linux.go:165] "Using iptables Proxier"
	I0708 21:01:16.230866       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 21:01:16.231489       1 server.go:872] "Version info" version="v1.30.2"
	I0708 21:01:16.231537       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 21:01:16.236829       1 config.go:319] "Starting node config controller"
	I0708 21:01:16.236919       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 21:01:16.238031       1 config.go:192] "Starting service config controller"
	I0708 21:01:16.238628       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 21:01:16.239000       1 config.go:101] "Starting endpoint slice config controller"
	I0708 21:01:16.239051       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 21:01:16.337978       1 shared_informer.go:320] Caches are synced for node config
	I0708 21:01:16.339175       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 21:01:16.339350       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [cd8b4dd934547918e6dd0265b5ab59c0c042fe802122b6dde6fb56c7525b3086] <==
	W0708 21:00:57.500844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 21:00:57.500873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 21:00:58.312117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 21:00:58.312248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 21:00:58.359812       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 21:00:58.359912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 21:00:58.438607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 21:00:58.438658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 21:00:58.538948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 21:00:58.538978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0708 21:00:58.553269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 21:00:58.553476       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0708 21:00:58.580239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 21:00:58.580788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 21:00:58.580535       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 21:00:58.580944       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 21:00:58.593164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 21:00:58.593214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 21:00:58.598826       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0708 21:00:58.598879       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0708 21:00:58.690185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 21:00:58.690398       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 21:00:58.760082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 21:00:58.760191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0708 21:01:00.585252       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
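	
	The "forbidden" warnings above appear during kube-scheduler startup, typically because the freshly restarted apiserver has not yet finished syncing its RBAC policy; they stop at 21:00:58 and the caches report synced at 21:01:00, so they look transient rather than a root cause here. A minimal way to double-check the scheduler's permissions after startup (a sketch, assuming the embed-certs-239931 context from this run) would be:
	
	    kubectl --context embed-certs-239931 auth can-i list persistentvolumeclaims --as=system:kube-scheduler --all-namespaces
	    kubectl --context embed-certs-239931 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler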
	
	
	==> kubelet <==
	Jul 08 21:08:00 embed-certs-239931 kubelet[3891]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:08:00 embed-certs-239931 kubelet[3891]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:08:00 embed-certs-239931 kubelet[3891]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:08:00 embed-certs-239931 kubelet[3891]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:08:05 embed-certs-239931 kubelet[3891]: E0708 21:08:05.519732    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:08:17 embed-certs-239931 kubelet[3891]: E0708 21:08:17.519129    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:08:31 embed-certs-239931 kubelet[3891]: E0708 21:08:31.519502    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:08:43 embed-certs-239931 kubelet[3891]: E0708 21:08:43.519132    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:08:55 embed-certs-239931 kubelet[3891]: E0708 21:08:55.520086    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:09:00 embed-certs-239931 kubelet[3891]: E0708 21:09:00.539973    3891 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:09:00 embed-certs-239931 kubelet[3891]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:09:00 embed-certs-239931 kubelet[3891]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:09:00 embed-certs-239931 kubelet[3891]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:09:00 embed-certs-239931 kubelet[3891]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:09:10 embed-certs-239931 kubelet[3891]: E0708 21:09:10.520695    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:09:23 embed-certs-239931 kubelet[3891]: E0708 21:09:23.518039    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:09:36 embed-certs-239931 kubelet[3891]: E0708 21:09:36.519842    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:09:48 embed-certs-239931 kubelet[3891]: E0708 21:09:48.518839    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:10:00 embed-certs-239931 kubelet[3891]: E0708 21:10:00.535644    3891 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:10:00 embed-certs-239931 kubelet[3891]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:10:00 embed-certs-239931 kubelet[3891]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:10:00 embed-certs-239931 kubelet[3891]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:10:00 embed-certs-239931 kubelet[3891]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:10:02 embed-certs-239931 kubelet[3891]: E0708 21:10:02.519346    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:10:15 embed-certs-239931 kubelet[3891]: E0708 21:10:15.519013    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
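	
	The ImagePullBackOff loop above is expected for this suite: the metrics-server addon was enabled with --registries=MetricsServer=fake.domain (see the Audit table further down), so the kubelet is deliberately pulling from an unreachable registry. A quick way to confirm which image the deployment is actually pointed at (a sketch, assuming the addon's usual metrics-server deployment name in kube-system) would be:
	
	    kubectl --context embed-certs-239931 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	    kubectl --context embed-certs-239931 -n kube-system get events --field-selector involvedObject.name=metrics-server-569cc877fc-f2dkn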
	
	
	==> storage-provisioner [ce0f4fb108aad8b7e4d5f290e6c38ba959eaff10eb996db4ead860b3da656ffe] <==
	I0708 21:01:16.218860       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 21:01:16.245377       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 21:01:16.245501       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 21:01:16.268137       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 21:01:16.270696       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-239931_21cca3f3-9f2a-4eca-bab0-e680410695f3!
	I0708 21:01:16.272271       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a99a8de8-7120-4951-95cc-51036a51cc59", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-239931_21cca3f3-9f2a-4eca-bab0-e680410695f3 became leader
	I0708 21:01:16.371621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-239931_21cca3f3-9f2a-4eca-bab0-e680410695f3!
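	
	The storage-provisioner block above shows a normal startup: it acquires the kube-system/k8s.io-minikube-hostpath leader lease (recorded on an Endpoints object, per the event) and then starts the hostpath provisioner controller. If leader election ever looked stuck, one could inspect that object directly; a sketch against this profile:
	
	    kubectl --context embed-certs-239931 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml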
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-239931 -n embed-certs-239931
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-239931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-f2dkn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-239931 describe pod metrics-server-569cc877fc-f2dkn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-239931 describe pod metrics-server-569cc877fc-f2dkn: exit status 1 (67.215254ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-f2dkn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-239931 describe pod metrics-server-569cc877fc-f2dkn: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.98s)
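The failure above means no pod matching the k8s-app=kubernetes-dashboard selector (the same wait shown for the no-preload run below) became ready within 9m0s after the restart. Note that in the Audit table below, the "addons enable dashboard -p embed-certs-239931" invocation has a start time but no recorded end time, which is consistent with the dashboard manifests never being applied. A rough manual equivalent of what the harness waits for, assuming the same profile context, is:

    kubectl --context embed-certs-239931 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context embed-certs-239931 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m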

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-028021 -n no-preload-028021
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-08 21:10:28.850402248 +0000 UTC m=+6088.763280051
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-028021 -n no-preload-028021
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-028021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-028021 logs -n 25: (1.564257413s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-897827                                        | pause-897827                 | jenkins | v1.33.1 | 08 Jul 24 20:46 UTC | 08 Jul 24 20:46 UTC |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:46 UTC | 08 Jul 24 20:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| ssh     | cert-options-059722 ssh                                | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-059722 -- sudo                         | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-059722                                 | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-028021             | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-914355             | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-239931            | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-733920 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-733920                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:50 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028021                  | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071971  | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-239931                 | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071971       | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC | 08 Jul 24 21:01 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
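	
	Flattened back into shell form, the no-preload profile's cycle recorded in the Audit table above was roughly the following (same flags as logged; binary path relative to the test workspace):
	
	    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-028021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	    out/minikube-linux-amd64 stop -p no-preload-028021 --alsologtostderr -v=3
	    out/minikube-linux-amd64 addons enable dashboard -p no-preload-028021 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	    out/minikube-linux-amd64 start -p no-preload-028021 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.2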
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 20:53:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 20:53:37.291760   59655 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:53:37.291847   59655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:53:37.291851   59655 out.go:304] Setting ErrFile to fd 2...
	I0708 20:53:37.291855   59655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:53:37.292047   59655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:53:37.292558   59655 out.go:298] Setting JSON to false
	I0708 20:53:37.293434   59655 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5766,"bootTime":1720466251,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:53:37.293485   59655 start.go:139] virtualization: kvm guest
	I0708 20:53:37.296412   59655 out.go:177] * [default-k8s-diff-port-071971] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:53:37.297727   59655 notify.go:220] Checking for updates...
	I0708 20:53:37.297756   59655 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:53:37.299168   59655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:53:37.300541   59655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:53:37.301818   59655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:53:37.303117   59655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:53:37.304266   59655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:53:37.305793   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:53:37.306182   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:53:37.306236   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:53:37.321719   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I0708 20:53:37.322090   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:53:37.322593   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:53:37.322617   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:53:37.322908   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:53:37.323093   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:53:37.323329   59655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:53:37.323638   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:53:37.323679   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:53:37.338244   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0708 20:53:37.338660   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:53:37.339118   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:53:37.339144   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:53:37.339463   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:53:37.339735   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:53:37.374356   59655 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 20:53:37.375714   59655 start.go:297] selected driver: kvm2
	I0708 20:53:37.375729   59655 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:53:37.375866   59655 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:53:37.376843   59655 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:53:37.376918   59655 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:53:37.391219   59655 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:53:37.391602   59655 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:53:37.391659   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:53:37.391672   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:53:37.391707   59655 start.go:340] cluster config:
	{Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:53:37.391797   59655 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:53:37.393453   59655 out.go:177] * Starting "default-k8s-diff-port-071971" primary control-plane node in "default-k8s-diff-port-071971" cluster
	I0708 20:53:37.923695   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:40.995762   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:37.394736   59655 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:53:37.394768   59655 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 20:53:37.394777   59655 cache.go:56] Caching tarball of preloaded images
	I0708 20:53:37.394849   59655 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 20:53:37.394860   59655 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 20:53:37.394962   59655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:53:37.395154   59655 start.go:360] acquireMachinesLock for default-k8s-diff-port-071971: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:53:47.075721   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:50.147727   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:56.227766   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:59.299738   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:05.379699   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:08.451749   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:14.531759   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:17.603688   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:23.683730   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:26.755738   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:32.835706   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:35.907702   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:41.987722   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:45.059873   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:51.139726   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:54.211797   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:00.291730   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:03.363720   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:09.443741   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:12.515718   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:19.358315   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:55:19.358408   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:55:19.359948   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:55:19.360000   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:55:19.360076   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:55:19.360217   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:55:19.360354   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:55:19.360443   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:55:19.362594   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:55:19.362671   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:55:19.362761   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:55:19.362915   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:55:19.362997   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:55:19.363087   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:55:19.363181   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:55:19.363271   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:55:19.363360   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:55:19.363470   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:55:19.363582   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:55:19.363636   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:55:19.363711   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:55:19.363781   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:55:19.363852   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:55:19.363941   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:55:19.364010   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:55:19.364135   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:55:19.364226   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:55:19.364276   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:55:19.364342   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:55:18.595786   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:19.366132   57466 out.go:204]   - Booting up control plane ...
	I0708 20:55:19.366219   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:55:19.366301   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:55:19.366364   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:55:19.366433   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:55:19.366579   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:55:19.366629   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:55:19.366692   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.366846   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.366909   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367070   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367133   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367285   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367344   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367511   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367575   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367735   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367743   57466 kubeadm.go:309] 
	I0708 20:55:19.367783   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:55:19.367817   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:55:19.367823   57466 kubeadm.go:309] 
	I0708 20:55:19.367851   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:55:19.367888   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:55:19.367991   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:55:19.368009   57466 kubeadm.go:309] 
	I0708 20:55:19.368127   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:55:19.368164   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:55:19.368192   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:55:19.368198   57466 kubeadm.go:309] 
	I0708 20:55:19.368284   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:55:19.368355   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:55:19.368362   57466 kubeadm.go:309] 
	I0708 20:55:19.368455   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:55:19.368539   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:55:19.368606   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:55:19.368666   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:55:19.368673   57466 kubeadm.go:309] 
	W0708 20:55:19.368784   57466 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
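	
	The stderr warning above ("kubelet service is not enabled") together with the repeated connection-refused responses from http://localhost:10248/healthz points at the kubelet never coming up inside the guest; the v1.20.0 version string matches the old-k8s-version-914355 start in the Audit table. The checks kubeadm itself suggests can be run from the host over minikube ssh, for example:
	
	    out/minikube-linux-amd64 ssh -p old-k8s-version-914355 "sudo systemctl status kubelet --no-pager"
	    out/minikube-linux-amd64 ssh -p old-k8s-version-914355 "sudo journalctl -xeu kubelet | tail -n 50"
	    out/minikube-linux-amd64 ssh -p old-k8s-version-914355 "curl -sS http://localhost:10248/healthz"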
	
	I0708 20:55:19.368831   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 20:55:19.838778   57466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:55:19.853958   57466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:55:19.863986   57466 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:55:19.864010   57466 kubeadm.go:156] found existing configuration files:
	
	I0708 20:55:19.864055   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:55:19.873085   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:55:19.873147   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:55:19.882654   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:55:19.891579   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:55:19.891634   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:55:19.901397   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.910901   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:55:19.910976   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.920599   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:55:19.929826   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:55:19.929891   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:55:19.939284   57466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 20:55:20.153136   57466 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 20:55:21.667700   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:27.747756   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:30.819712   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:33.824320   59107 start.go:364] duration metric: took 3m48.54985296s to acquireMachinesLock for "embed-certs-239931"
	I0708 20:55:33.824375   59107 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:55:33.824390   59107 fix.go:54] fixHost starting: 
	I0708 20:55:33.824700   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:55:33.824728   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:55:33.839554   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0708 20:55:33.839987   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:55:33.840472   59107 main.go:141] libmachine: Using API Version  1
	I0708 20:55:33.840495   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:55:33.840844   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:55:33.841030   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:33.841194   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 20:55:33.842597   59107 fix.go:112] recreateIfNeeded on embed-certs-239931: state=Stopped err=<nil>
	I0708 20:55:33.842627   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	W0708 20:55:33.842787   59107 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:55:33.844574   59107 out.go:177] * Restarting existing kvm2 VM for "embed-certs-239931" ...
	I0708 20:55:33.845674   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Start
	I0708 20:55:33.845858   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring networks are active...
	I0708 20:55:33.846607   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring network default is active
	I0708 20:55:33.846907   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring network mk-embed-certs-239931 is active
	I0708 20:55:33.847329   59107 main.go:141] libmachine: (embed-certs-239931) Getting domain xml...
	I0708 20:55:33.848120   59107 main.go:141] libmachine: (embed-certs-239931) Creating domain...
	I0708 20:55:35.057523   59107 main.go:141] libmachine: (embed-certs-239931) Waiting to get IP...
	I0708 20:55:35.058300   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.058841   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.058870   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.058773   60083 retry.go:31] will retry after 280.969113ms: waiting for machine to come up
	I0708 20:55:33.821580   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:55:33.821617   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:55:33.821932   58678 buildroot.go:166] provisioning hostname "no-preload-028021"
	I0708 20:55:33.821957   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:55:33.822166   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:55:33.824193   58678 machine.go:97] duration metric: took 4m37.421469682s to provisionDockerMachine
	I0708 20:55:33.824234   58678 fix.go:56] duration metric: took 4m37.444794791s for fixHost
	I0708 20:55:33.824241   58678 start.go:83] releasing machines lock for "no-preload-028021", held for 4m37.44481517s
	W0708 20:55:33.824262   58678 start.go:713] error starting host: provision: host is not running
	W0708 20:55:33.824343   58678 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0708 20:55:33.824352   58678 start.go:728] Will try again in 5 seconds ...
	I0708 20:55:35.341327   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.341861   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.341882   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.341837   60083 retry.go:31] will retry after 333.972717ms: waiting for machine to come up
	I0708 20:55:35.677531   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.678035   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.678066   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.677984   60083 retry.go:31] will retry after 387.46652ms: waiting for machine to come up
	I0708 20:55:36.066618   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:36.067079   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:36.067106   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:36.067044   60083 retry.go:31] will retry after 523.369614ms: waiting for machine to come up
	I0708 20:55:36.591863   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:36.592337   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:36.592363   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:36.592295   60083 retry.go:31] will retry after 670.675561ms: waiting for machine to come up
	I0708 20:55:37.264084   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:37.264521   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:37.264565   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:37.264485   60083 retry.go:31] will retry after 775.348922ms: waiting for machine to come up
	I0708 20:55:38.041398   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:38.041860   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:38.041885   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:38.041801   60083 retry.go:31] will retry after 1.135585711s: waiting for machine to come up
	I0708 20:55:39.179405   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:39.179910   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:39.179938   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:39.179867   60083 retry.go:31] will retry after 1.422689354s: waiting for machine to come up
	I0708 20:55:38.826037   58678 start.go:360] acquireMachinesLock for no-preload-028021: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:55:40.603811   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:40.604240   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:40.604261   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:40.604199   60083 retry.go:31] will retry after 1.640612147s: waiting for machine to come up
	I0708 20:55:42.247230   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:42.247797   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:42.247837   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:42.247733   60083 retry.go:31] will retry after 2.031069729s: waiting for machine to come up
	I0708 20:55:44.280878   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:44.281419   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:44.281451   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:44.281355   60083 retry.go:31] will retry after 2.394813785s: waiting for machine to come up
	I0708 20:55:46.678897   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:46.679398   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:46.679430   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:46.679357   60083 retry.go:31] will retry after 2.419242459s: waiting for machine to come up
	I0708 20:55:49.100362   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:49.100901   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:49.100964   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:49.100858   60083 retry.go:31] will retry after 4.241202363s: waiting for machine to come up
	I0708 20:55:54.868873   59655 start.go:364] duration metric: took 2m17.473689428s to acquireMachinesLock for "default-k8s-diff-port-071971"
	I0708 20:55:54.868970   59655 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:55:54.868991   59655 fix.go:54] fixHost starting: 
	I0708 20:55:54.869400   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:55:54.869432   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:55:54.888853   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44159
	I0708 20:55:54.889234   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:55:54.889674   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:55:54.889698   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:55:54.890009   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:55:54.890196   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:55:54.890332   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 20:55:54.891932   59655 fix.go:112] recreateIfNeeded on default-k8s-diff-port-071971: state=Stopped err=<nil>
	I0708 20:55:54.891972   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	W0708 20:55:54.892120   59655 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:55:54.894439   59655 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-071971" ...
	I0708 20:55:53.347154   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.347587   59107 main.go:141] libmachine: (embed-certs-239931) Found IP for machine: 192.168.61.126
	I0708 20:55:53.347601   59107 main.go:141] libmachine: (embed-certs-239931) Reserving static IP address...
	I0708 20:55:53.347612   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has current primary IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.348084   59107 main.go:141] libmachine: (embed-certs-239931) Reserved static IP address: 192.168.61.126
	I0708 20:55:53.348106   59107 main.go:141] libmachine: (embed-certs-239931) Waiting for SSH to be available...
	I0708 20:55:53.348119   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "embed-certs-239931", mac: "52:54:00:b3:d9:ac", ip: "192.168.61.126"} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.348136   59107 main.go:141] libmachine: (embed-certs-239931) DBG | skip adding static IP to network mk-embed-certs-239931 - found existing host DHCP lease matching {name: "embed-certs-239931", mac: "52:54:00:b3:d9:ac", ip: "192.168.61.126"}
	I0708 20:55:53.348148   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Getting to WaitForSSH function...
	I0708 20:55:53.350167   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.350545   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.350583   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.350651   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Using SSH client type: external
	I0708 20:55:53.350675   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa (-rw-------)
	I0708 20:55:53.350704   59107 main.go:141] libmachine: (embed-certs-239931) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:55:53.350722   59107 main.go:141] libmachine: (embed-certs-239931) DBG | About to run SSH command:
	I0708 20:55:53.350736   59107 main.go:141] libmachine: (embed-certs-239931) DBG | exit 0
	I0708 20:55:53.479934   59107 main.go:141] libmachine: (embed-certs-239931) DBG | SSH cmd err, output: <nil>: 
	I0708 20:55:53.480309   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetConfigRaw
	I0708 20:55:53.480891   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:53.483079   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.483399   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.483424   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.483740   59107 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/config.json ...
	I0708 20:55:53.483920   59107 machine.go:94] provisionDockerMachine start ...
	I0708 20:55:53.483937   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:53.484172   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.486461   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.486772   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.486793   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.486921   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.487075   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.487253   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.487385   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.487556   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.487774   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.487786   59107 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:55:53.600047   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:55:53.600078   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.600308   59107 buildroot.go:166] provisioning hostname "embed-certs-239931"
	I0708 20:55:53.600342   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.600508   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.603107   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.603498   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.603529   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.603728   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.603954   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.604122   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.604345   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.604512   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.604716   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.604737   59107 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-239931 && echo "embed-certs-239931" | sudo tee /etc/hostname
	I0708 20:55:53.734414   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-239931
	
	I0708 20:55:53.734457   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.737117   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.737473   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.737501   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.737640   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.737852   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.738020   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.738184   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.738355   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.738536   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.738558   59107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-239931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-239931/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-239931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:55:53.860753   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:55:53.860781   59107 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:55:53.860799   59107 buildroot.go:174] setting up certificates
	I0708 20:55:53.860808   59107 provision.go:84] configureAuth start
	I0708 20:55:53.860816   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.861070   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:53.863652   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.863999   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.864018   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.864221   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.866207   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.866480   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.866504   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.866613   59107 provision.go:143] copyHostCerts
	I0708 20:55:53.866671   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:55:53.866680   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:55:53.866741   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:55:53.866837   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:55:53.866845   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:55:53.866868   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:55:53.866932   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:55:53.866939   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:55:53.866959   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:55:53.867017   59107 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.embed-certs-239931 san=[127.0.0.1 192.168.61.126 embed-certs-239931 localhost minikube]
	I0708 20:55:54.171765   59107 provision.go:177] copyRemoteCerts
	I0708 20:55:54.171835   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:55:54.171859   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.174341   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.174621   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.174650   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.174762   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.174957   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.175129   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.175280   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.262051   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:55:54.287118   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0708 20:55:54.310071   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:55:54.337811   59107 provision.go:87] duration metric: took 476.990356ms to configureAuth
	I0708 20:55:54.337851   59107 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:55:54.338077   59107 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:55:54.338147   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.340972   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.341259   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.341296   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.341423   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.341720   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.341870   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.342006   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.342147   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:54.342332   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:54.342350   59107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:55:54.618752   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:55:54.618775   59107 machine.go:97] duration metric: took 1.134844127s to provisionDockerMachine
	I0708 20:55:54.618786   59107 start.go:293] postStartSetup for "embed-certs-239931" (driver="kvm2")
	I0708 20:55:54.618795   59107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:55:54.618823   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.619220   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:55:54.619249   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.621857   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.622144   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.622168   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.622348   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.622532   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.622703   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.622853   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.710096   59107 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:55:54.714437   59107 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:55:54.714458   59107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:55:54.714524   59107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:55:54.714597   59107 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:55:54.714679   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:55:54.724350   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:55:54.748078   59107 start.go:296] duration metric: took 129.264358ms for postStartSetup
	I0708 20:55:54.748120   59107 fix.go:56] duration metric: took 20.923736253s for fixHost
	I0708 20:55:54.748138   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.750818   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.751200   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.751227   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.751377   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.751611   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.751763   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.751879   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.752034   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:54.752240   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:54.752256   59107 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:55:54.868663   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472154.844724958
	
	I0708 20:55:54.868694   59107 fix.go:216] guest clock: 1720472154.844724958
	I0708 20:55:54.868706   59107 fix.go:229] Guest: 2024-07-08 20:55:54.844724958 +0000 UTC Remote: 2024-07-08 20:55:54.748123056 +0000 UTC m=+249.617599643 (delta=96.601902ms)
	I0708 20:55:54.868764   59107 fix.go:200] guest clock delta is within tolerance: 96.601902ms
	I0708 20:55:54.868776   59107 start.go:83] releasing machines lock for "embed-certs-239931", held for 21.044425411s
	I0708 20:55:54.868811   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.869092   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:54.871867   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.872252   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.872295   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.872451   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.872921   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.873060   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.873151   59107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:55:54.873196   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.873271   59107 ssh_runner.go:195] Run: cat /version.json
	I0708 20:55:54.873297   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.876118   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876142   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876614   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.876641   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876682   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.876699   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876851   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.876903   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.877017   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.877020   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.877193   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.877266   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.877349   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.877424   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.984516   59107 ssh_runner.go:195] Run: systemctl --version
	I0708 20:55:54.990926   59107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:55:55.142010   59107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:55:55.148138   59107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:55:55.148203   59107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:55:55.164086   59107 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:55:55.164111   59107 start.go:494] detecting cgroup driver to use...
	I0708 20:55:55.164204   59107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:55:55.184836   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:55:55.204002   59107 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:55:55.204079   59107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:55:55.218405   59107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:55:55.233462   59107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:55:55.357278   59107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:55:55.521141   59107 docker.go:233] disabling docker service ...
	I0708 20:55:55.521218   59107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:55:55.538949   59107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:55:55.558613   59107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:55:55.696926   59107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:55:55.819810   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:55:55.837012   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:55:55.856417   59107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:55:55.856497   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.868488   59107 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:55:55.868556   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.879503   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.891183   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.901872   59107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:55:55.914498   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.925676   59107 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.944340   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.955961   59107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:55:55.965785   59107 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:55:55.965845   59107 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:55:55.979853   59107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:55:55.989331   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:55:56.108476   59107 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:55:56.262396   59107 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:55:56.262463   59107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:55:56.267682   59107 start.go:562] Will wait 60s for crictl version
	I0708 20:55:56.267740   59107 ssh_runner.go:195] Run: which crictl
	I0708 20:55:56.273115   59107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:55:56.323276   59107 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:55:56.323364   59107 ssh_runner.go:195] Run: crio --version
	I0708 20:55:56.352650   59107 ssh_runner.go:195] Run: crio --version
	I0708 20:55:56.394502   59107 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:55:54.895951   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Start
	I0708 20:55:54.896150   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring networks are active...
	I0708 20:55:54.896971   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring network default is active
	I0708 20:55:54.897344   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring network mk-default-k8s-diff-port-071971 is active
	I0708 20:55:54.897672   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Getting domain xml...
	I0708 20:55:54.898340   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Creating domain...
	I0708 20:55:56.182187   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting to get IP...
	I0708 20:55:56.183209   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.183699   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.183759   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.183663   60221 retry.go:31] will retry after 255.382138ms: waiting for machine to come up
	I0708 20:55:56.441290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.441760   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.441789   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.441718   60221 retry.go:31] will retry after 363.116234ms: waiting for machine to come up
	I0708 20:55:56.806418   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.806871   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.806899   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.806819   60221 retry.go:31] will retry after 392.319836ms: waiting for machine to come up
	I0708 20:55:57.200645   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.201144   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.201176   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:57.201095   60221 retry.go:31] will retry after 528.490844ms: waiting for machine to come up
	I0708 20:55:56.395778   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:56.398458   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:56.398826   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:56.398853   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:56.399088   59107 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0708 20:55:56.403789   59107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:55:56.418081   59107 kubeadm.go:877] updating cluster {Name:embed-certs-239931 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:55:56.418244   59107 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:55:56.418312   59107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:55:56.459969   59107 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:55:56.460034   59107 ssh_runner.go:195] Run: which lz4
	I0708 20:55:56.464561   59107 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 20:55:56.469087   59107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:55:56.469130   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 20:55:58.010716   59107 crio.go:462] duration metric: took 1.546186223s to copy over tarball
	I0708 20:55:58.010782   59107 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:55:57.731640   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.732172   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.732223   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:57.732129   60221 retry.go:31] will retry after 554.611559ms: waiting for machine to come up
	I0708 20:55:58.287924   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.288512   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.288557   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:58.288491   60221 retry.go:31] will retry after 642.466107ms: waiting for machine to come up
	I0708 20:55:58.932485   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.933002   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.933032   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:58.932958   60221 retry.go:31] will retry after 999.83146ms: waiting for machine to come up
	I0708 20:55:59.934050   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:59.934618   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:59.934664   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:59.934571   60221 retry.go:31] will retry after 1.069868254s: waiting for machine to come up
	I0708 20:56:01.006049   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:01.006563   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:01.006594   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:01.006506   60221 retry.go:31] will retry after 1.182777891s: waiting for machine to come up
	I0708 20:56:02.191001   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:02.191460   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:02.191483   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:02.191418   60221 retry.go:31] will retry after 1.559742627s: waiting for machine to come up
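The retry.go:31 lines above show libmachine polling the libvirt DHCP leases with a growing, jittered delay until the VM reports an IP. A rough sketch of that wait loop, assuming a caller-supplied lookupIP probe; the function, delays, and growth factor are illustrative, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with an increasing, jittered delay until it
// returns an address or the deadline passes.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/4)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow roughly like the intervals seen in the log
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.72.163", nil
	}, time.Minute)
	fmt.Println(ip, err)
}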
	I0708 20:56:00.267199   59107 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256392679s)
	I0708 20:56:00.267230   59107 crio.go:469] duration metric: took 2.256489175s to extract the tarball
	I0708 20:56:00.267240   59107 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:56:00.305692   59107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:00.346669   59107 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:56:00.346694   59107 cache_images.go:84] Images are preloaded, skipping loading
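The preload decision above boils down to: list the images crio already has, and only copy and extract the preload tarball when the pinned kube-apiserver image is missing. A simplified stand-in for that crio.go check is sketched below; the crictl JSON shape assumed here ({"images":[{"repoTags":[...]}]}) and the sudo invocation are assumptions, not a copy of minikube's code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether `crictl images --output json` already lists ref.
func hasImage(ref string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, ref) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.2")
	if err != nil {
		fmt.Println("crictl query failed:", err)
		return
	}
	if !ok {
		fmt.Println("assuming images are not preloaded; extracting the preload tarball")
		// e.g. sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	}
}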
	I0708 20:56:00.346703   59107 kubeadm.go:928] updating node { 192.168.61.126 8443 v1.30.2 crio true true} ...
	I0708 20:56:00.346804   59107 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-239931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:00.346868   59107 ssh_runner.go:195] Run: crio config
	I0708 20:56:00.392577   59107 cni.go:84] Creating CNI manager for ""
	I0708 20:56:00.392597   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:00.392608   59107 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:00.392637   59107 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-239931 NodeName:embed-certs-239931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:00.392814   59107 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-239931"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:00.392894   59107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:00.403593   59107 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:00.403675   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:00.413449   59107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0708 20:56:00.430407   59107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:00.447599   59107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0708 20:56:00.465525   59107 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:00.469912   59107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:00.483255   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:00.623802   59107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:00.642946   59107 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931 for IP: 192.168.61.126
	I0708 20:56:00.642967   59107 certs.go:194] generating shared ca certs ...
	I0708 20:56:00.642982   59107 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:00.643143   59107 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:00.643184   59107 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:00.643193   59107 certs.go:256] generating profile certs ...
	I0708 20:56:00.643270   59107 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/client.key
	I0708 20:56:00.643317   59107 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.key.7743ab88
	I0708 20:56:00.643354   59107 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.key
	I0708 20:56:00.643487   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:00.643524   59107 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:00.643533   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:00.643556   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:00.643579   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:00.643604   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:00.643670   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:00.644353   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:00.699260   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:00.752536   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:00.783946   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:00.812524   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0708 20:56:00.843035   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:56:00.872061   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:00.898805   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 20:56:00.925402   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:00.952114   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:00.984067   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:01.010037   59107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:01.027599   59107 ssh_runner.go:195] Run: openssl version
	I0708 20:56:01.033942   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:01.046273   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.051807   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.051887   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.058482   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:01.070774   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:01.083215   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.088389   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.088460   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.094594   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:01.107360   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:01.119973   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.125011   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.125085   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.131596   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:56:01.143993   59107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:01.149299   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:01.156201   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:01.162939   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:01.169874   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:01.176264   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:01.182905   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
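Each `openssl x509 ... -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours. The same check can be done with Go's standard library, as sketched below, assuming a PEM-encoded certificate on disk; the path is just the first one from the log and the helper name is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}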
	I0708 20:56:01.189961   59107 kubeadm.go:391] StartCluster: {Name:embed-certs-239931 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:01.190041   59107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:01.190085   59107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:01.238097   59107 cri.go:89] found id: ""
	I0708 20:56:01.238167   59107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:01.250478   59107 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:01.250503   59107 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:01.250509   59107 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:01.250562   59107 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:01.261664   59107 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:01.262667   59107 kubeconfig.go:125] found "embed-certs-239931" server: "https://192.168.61.126:8443"
	I0708 20:56:01.264788   59107 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:01.275846   59107 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.126
	I0708 20:56:01.275888   59107 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:01.275908   59107 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:01.276006   59107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:01.318646   59107 cri.go:89] found id: ""
	I0708 20:56:01.318745   59107 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:01.340273   59107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:01.353325   59107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:01.353360   59107 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:01.353412   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:56:01.363659   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:01.363732   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:01.374340   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:56:01.384284   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:01.384352   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:01.394981   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:56:01.405532   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:01.405604   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:01.416741   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:56:01.427724   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:01.427812   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
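The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so `kubeadm init phase kubeconfig` can regenerate it. A minimal local sketch of that loop, using plain file reads instead of the sudo grep/rm run over SSH in the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already targets the expected endpoint; keep it
		}
		// Missing or pointing elsewhere: remove so kubeadm rewrites it.
		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, "remove:", err)
		}
	}
}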
	I0708 20:56:01.439352   59107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:01.451286   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:01.581829   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.013995   59107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.432133224s)
	I0708 20:56:03.014024   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.229195   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.305328   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.415409   59107 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:03.415508   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:03.916187   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:04.416389   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:04.489450   59107 api_server.go:72] duration metric: took 1.074041899s to wait for apiserver process to appear ...
	I0708 20:56:04.489482   59107 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:04.489516   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:04.490133   59107 api_server.go:269] stopped: https://192.168.61.126:8443/healthz: Get "https://192.168.61.126:8443/healthz": dial tcp 192.168.61.126:8443: connect: connection refused
	I0708 20:56:04.989698   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:03.753446   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:03.753998   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:03.754026   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:03.753954   60221 retry.go:31] will retry after 1.922949894s: waiting for machine to come up
	I0708 20:56:05.679244   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:05.679831   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:05.679860   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:05.679794   60221 retry.go:31] will retry after 3.531627288s: waiting for machine to come up
	I0708 20:56:07.673375   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:56:07.673404   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:56:07.673420   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:07.776516   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:07.776551   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:07.989668   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:07.996865   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:07.996897   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:08.490538   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:08.496342   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:08.496374   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:08.990583   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:09.001043   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0708 20:56:09.011126   59107 api_server.go:141] control plane version: v1.30.2
	I0708 20:56:09.011160   59107 api_server.go:131] duration metric: took 4.521668725s to wait for apiserver health ...
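The healthz probes above are plain HTTPS GETs against the apiserver that tolerate the early 403 (anonymous access blocked before RBAC bootstraps) and 500 (etcd and post-start hooks still settling) responses, and stop once /healthz returns 200. A stripped-down version of that wait is sketched here; the address is the one from this run, and the timeout, interval, and skipped certificate verification are assumptions for the sketch:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.126:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403 and 500 are expected while the control plane finishes starting.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}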
	I0708 20:56:09.011171   59107 cni.go:84] Creating CNI manager for ""
	I0708 20:56:09.011179   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:09.012842   59107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:56:09.014197   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:56:09.041325   59107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 20:56:09.073110   59107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:56:09.086225   59107 system_pods.go:59] 8 kube-system pods found
	I0708 20:56:09.086265   59107 system_pods.go:61] "coredns-7db6d8ff4d-wnqsl" [868e66bf-9f86-465f-aad1-d11a6d218ee6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:56:09.086276   59107 system_pods.go:61] "etcd-embed-certs-239931" [48815314-6e48-4fe0-b7b1-4a1d2f6610d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:56:09.086286   59107 system_pods.go:61] "kube-apiserver-embed-certs-239931" [665311f4-d633-4b93-ae8c-2b43b45fff68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:56:09.086294   59107 system_pods.go:61] "kube-controller-manager-embed-certs-239931" [4ab6d657-8c74-491c-b965-ac68f2bd323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:56:09.086309   59107 system_pods.go:61] "kube-proxy-5h5xl" [9b169148-aa75-40a2-b08b-1d579ee179b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 20:56:09.086316   59107 system_pods.go:61] "kube-scheduler-embed-certs-239931" [012399d8-10a4-407d-a899-3c840dd52ca8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:56:09.086331   59107 system_pods.go:61] "metrics-server-569cc877fc-h4btg" [c78cfc3c-159f-4a06-b4a0-63f8bd0a6703] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:56:09.086339   59107 system_pods.go:61] "storage-provisioner" [2ca0ea1d-5d1c-4e18-a871-e035a8946b3c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 20:56:09.086348   59107 system_pods.go:74] duration metric: took 13.216051ms to wait for pod list to return data ...
	I0708 20:56:09.086363   59107 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:56:09.089689   59107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:56:09.089719   59107 node_conditions.go:123] node cpu capacity is 2
	I0708 20:56:09.089732   59107 node_conditions.go:105] duration metric: took 3.363611ms to run NodePressure ...
	I0708 20:56:09.089751   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:09.377271   59107 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:56:09.383148   59107 kubeadm.go:733] kubelet initialised
	I0708 20:56:09.383174   59107 kubeadm.go:734] duration metric: took 5.876526ms waiting for restarted kubelet to initialise ...
	I0708 20:56:09.383183   59107 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:56:09.388903   59107 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace to be "Ready" ...
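The pod_ready.go wait above keeps polling each system-critical pod until its PodReady condition turns True. A compact client-go sketch of that check is shown below; the kubeconfig path, pod name, and timeout are placeholders taken from or inspired by this run, not the harness's real code, and the program assumes k8s.io/client-go is available in go.mod:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder kubeconfig path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-wnqsl", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}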
	I0708 20:56:09.214856   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:09.215410   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:09.215441   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:09.215355   60221 retry.go:31] will retry after 3.64169465s: waiting for machine to come up
	I0708 20:56:14.180834   58678 start.go:364] duration metric: took 35.354748041s to acquireMachinesLock for "no-preload-028021"
	I0708 20:56:14.180893   58678 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:56:14.180905   58678 fix.go:54] fixHost starting: 
	I0708 20:56:14.181259   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:56:14.181299   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:56:14.197712   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I0708 20:56:14.198157   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:56:14.198615   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:56:14.198637   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:56:14.198996   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:56:14.199173   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:14.199342   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:56:14.200905   58678 fix.go:112] recreateIfNeeded on no-preload-028021: state=Stopped err=<nil>
	I0708 20:56:14.200930   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	W0708 20:56:14.201103   58678 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:56:14.203062   58678 out.go:177] * Restarting existing kvm2 VM for "no-preload-028021" ...
	I0708 20:56:11.396453   59107 pod_ready.go:102] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:13.396672   59107 pod_ready.go:102] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:12.860535   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.860988   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Found IP for machine: 192.168.72.163
	I0708 20:56:12.861010   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Reserving static IP address...
	I0708 20:56:12.861027   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has current primary IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.861445   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-071971", mac: "52:54:00:40:a7:be", ip: "192.168.72.163"} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.861473   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Reserved static IP address: 192.168.72.163
	I0708 20:56:12.861494   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | skip adding static IP to network mk-default-k8s-diff-port-071971 - found existing host DHCP lease matching {name: "default-k8s-diff-port-071971", mac: "52:54:00:40:a7:be", ip: "192.168.72.163"}
	I0708 20:56:12.861515   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Getting to WaitForSSH function...
	I0708 20:56:12.861531   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for SSH to be available...
	I0708 20:56:12.864099   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.864436   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.864465   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.864631   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Using SSH client type: external
	I0708 20:56:12.864663   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa (-rw-------)
	I0708 20:56:12.864693   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:56:12.864708   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | About to run SSH command:
	I0708 20:56:12.864721   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | exit 0
	I0708 20:56:12.996077   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | SSH cmd err, output: <nil>: 
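libmachine's "external" SSH client above shells out to /usr/bin/ssh with host-key checking disabled and runs `exit 0` to confirm the guest is reachable. A reduced sketch of that probe follows; the key path and address are copied from the log and the option list is trimmed to the essentials:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa"
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", key,
		"docker@192.168.72.163",
		"exit 0",
	)
	// A zero exit status means sshd is up and the key is accepted.
	if err := cmd.Run(); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH available")
}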
	I0708 20:56:12.996459   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetConfigRaw
	I0708 20:56:12.997091   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:12.999431   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.999815   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.999844   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.000145   59655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:56:13.000354   59655 machine.go:94] provisionDockerMachine start ...
	I0708 20:56:13.000377   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:13.000558   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.002898   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.003255   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.003290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.003444   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.003626   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.003778   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.003930   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.004094   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.004297   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.004311   59655 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:56:13.119929   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:56:13.119956   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.120203   59655 buildroot.go:166] provisioning hostname "default-k8s-diff-port-071971"
	I0708 20:56:13.120320   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.120544   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.123750   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.124225   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.124256   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.124438   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.124647   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.124818   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.124993   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.125155   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.125339   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.125360   59655 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-071971 && echo "default-k8s-diff-port-071971" | sudo tee /etc/hostname
	I0708 20:56:13.256165   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-071971
	
	I0708 20:56:13.256199   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.258991   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.259342   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.259376   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.259596   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.259828   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.260011   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.260149   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.260325   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.260506   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.260530   59655 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-071971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-071971/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-071971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:56:13.381593   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:56:13.381627   59655 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:56:13.381684   59655 buildroot.go:174] setting up certificates
	I0708 20:56:13.381700   59655 provision.go:84] configureAuth start
	I0708 20:56:13.381716   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.382023   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:13.385065   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.385358   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.385394   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.385566   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.387752   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.388092   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.388132   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.388290   59655 provision.go:143] copyHostCerts
	I0708 20:56:13.388350   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:56:13.388361   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:56:13.388408   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:56:13.388506   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:56:13.388516   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:56:13.388536   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:56:13.388587   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:56:13.388593   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:56:13.388610   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:56:13.389123   59655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-071971 san=[127.0.0.1 192.168.72.163 default-k8s-diff-port-071971 localhost minikube]
	I0708 20:56:13.445451   59655 provision.go:177] copyRemoteCerts
	I0708 20:56:13.445509   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:56:13.445536   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.448926   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.449291   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.449320   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.449579   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.449785   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.449944   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.450097   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:13.542311   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0708 20:56:13.570585   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 20:56:13.597943   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:56:13.623837   59655 provision.go:87] duration metric: took 242.102893ms to configureAuth
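For reference, the server certificate that the copyRemoteCerts step above pushes to /etc/docker can be inspected from inside the guest to confirm the SANs generated during provisioning (127.0.0.1, 192.168.72.163, default-k8s-diff-port-071971, localhost, minikube). This is only a sketch of a manual check, assuming SSH access to the profile via minikube; it is not part of the test itself.

  # Sketch: inspect the SANs on the provisioned server certificate (paths taken from the log above).
  minikube -p default-k8s-diff-port-071971 ssh -- \
    "sudo openssl x509 -noout -text -in /etc/docker/server.pem" | grep -A1 'Subject Alternative Name'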
	I0708 20:56:13.623874   59655 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:56:13.624077   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:56:13.624144   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.626802   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.627247   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.627277   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.627553   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.627738   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.627910   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.628047   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.628214   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.628414   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.628442   59655 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:56:13.930321   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:56:13.930349   59655 machine.go:97] duration metric: took 929.979999ms to provisionDockerMachine
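The provisioning step above writes a small sysconfig drop-in for CRI-O and restarts the service; the %!s(MISSING) token in the logged command is the logger mangling what appears to be a literal %s format verb. A rough shell equivalent of what runs on the guest, using only the values shown in the log:

  # Sketch of the guest-side command logged at 20:56:13 above.
  sudo mkdir -p /etc/sysconfig
  printf %s "
  CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
  " | sudo tee /etc/sysconfig/crio.minikube
  sudo systemctl restart crio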
	I0708 20:56:13.930361   59655 start.go:293] postStartSetup for "default-k8s-diff-port-071971" (driver="kvm2")
	I0708 20:56:13.930371   59655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:56:13.930385   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:13.930714   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:56:13.930747   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.933397   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.933704   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.933735   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.933927   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.934119   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.934266   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.934393   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.019603   59655 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:56:14.024556   59655 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:56:14.024589   59655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:56:14.024651   59655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:56:14.024744   59655 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:56:14.024836   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:56:14.035798   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:14.062351   59655 start.go:296] duration metric: took 131.974167ms for postStartSetup
	I0708 20:56:14.062402   59655 fix.go:56] duration metric: took 19.193418124s for fixHost
	I0708 20:56:14.062428   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.065264   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.065784   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.065822   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.066027   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.066271   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.066441   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.066716   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.066965   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:14.067197   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:14.067210   59655 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:56:14.180654   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472174.151879540
	
	I0708 20:56:14.180683   59655 fix.go:216] guest clock: 1720472174.151879540
	I0708 20:56:14.180695   59655 fix.go:229] Guest: 2024-07-08 20:56:14.15187954 +0000 UTC Remote: 2024-07-08 20:56:14.062408788 +0000 UTC m=+156.804206336 (delta=89.470752ms)
	I0708 20:56:14.180751   59655 fix.go:200] guest clock delta is within tolerance: 89.470752ms
	I0708 20:56:14.180757   59655 start.go:83] releasing machines lock for "default-k8s-diff-port-071971", held for 19.311816598s
	I0708 20:56:14.180802   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.181119   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:14.183833   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.184164   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.184194   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.184365   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.184862   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.185029   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.185105   59655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:56:14.185152   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.185222   59655 ssh_runner.go:195] Run: cat /version.json
	I0708 20:56:14.185248   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.187788   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188002   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188135   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.188167   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.188299   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.188328   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188501   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.188505   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.188641   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.188715   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.188803   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.188885   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.189022   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.298253   59655 ssh_runner.go:195] Run: systemctl --version
	I0708 20:56:14.305004   59655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:56:14.457540   59655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:56:14.464497   59655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:56:14.464567   59655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:56:14.482063   59655 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:56:14.482093   59655 start.go:494] detecting cgroup driver to use...
	I0708 20:56:14.482172   59655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:56:14.500206   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:56:14.515905   59655 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:56:14.515952   59655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:56:14.532277   59655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:56:14.552772   59655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:56:14.686229   59655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:56:14.845428   59655 docker.go:233] disabling docker service ...
	I0708 20:56:14.845496   59655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:56:14.863157   59655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:56:14.881174   59655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:56:15.029269   59655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:56:15.165105   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:56:15.181619   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:56:15.202743   59655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:56:15.202806   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.215848   59655 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:56:15.215925   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.228697   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.240964   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.257002   59655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:56:15.270309   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.283215   59655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.303235   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.322364   59655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:56:15.340757   59655 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:56:15.340836   59655 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:56:15.360592   59655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:56:15.372486   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:15.510210   59655 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:56:15.656090   59655 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:56:15.656169   59655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:56:15.661847   59655 start.go:562] Will wait 60s for crictl version
	I0708 20:56:15.661917   59655 ssh_runner.go:195] Run: which crictl
	I0708 20:56:15.666004   59655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:56:15.707842   59655 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:56:15.707928   59655 ssh_runner.go:195] Run: crio --version
	I0708 20:56:15.740434   59655 ssh_runner.go:195] Run: crio --version
	I0708 20:56:15.772450   59655 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
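The sed edits logged at 20:56:15 above leave /etc/crio/crio.conf.d/02-crio.conf pointing at the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and the unprivileged-port sysctl. A small sketch for spot-checking that drop-in on the guest; the key names are taken from the sed commands in the log, and the expected values are inferred from them rather than re-read from the machine.

  # Sketch: confirm the CRI-O drop-in values that the provisioning step rewrote.
  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # Expected, per the sed commands above:
  #   pause_image = "registry.k8s.io/pause:3.9"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",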
	I0708 20:56:14.204596   58678 main.go:141] libmachine: (no-preload-028021) Calling .Start
	I0708 20:56:14.204780   58678 main.go:141] libmachine: (no-preload-028021) Ensuring networks are active...
	I0708 20:56:14.205463   58678 main.go:141] libmachine: (no-preload-028021) Ensuring network default is active
	I0708 20:56:14.205799   58678 main.go:141] libmachine: (no-preload-028021) Ensuring network mk-no-preload-028021 is active
	I0708 20:56:14.206280   58678 main.go:141] libmachine: (no-preload-028021) Getting domain xml...
	I0708 20:56:14.207187   58678 main.go:141] libmachine: (no-preload-028021) Creating domain...
	I0708 20:56:15.514100   58678 main.go:141] libmachine: (no-preload-028021) Waiting to get IP...
	I0708 20:56:15.514946   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:15.515419   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:15.515473   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:15.515397   60369 retry.go:31] will retry after 282.59763ms: waiting for machine to come up
	I0708 20:56:15.799976   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:15.800525   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:15.800555   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:15.800482   60369 retry.go:31] will retry after 377.094067ms: waiting for machine to come up
	I0708 20:56:16.179257   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:16.179953   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:16.179979   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:16.179861   60369 retry.go:31] will retry after 433.953923ms: waiting for machine to come up
	I0708 20:56:15.773711   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:15.776947   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:15.777368   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:15.777400   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:15.777704   59655 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0708 20:56:15.782466   59655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:15.796924   59655 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:56:15.797072   59655 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:56:15.797138   59655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:15.841838   59655 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:56:15.841922   59655 ssh_runner.go:195] Run: which lz4
	I0708 20:56:15.846443   59655 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0708 20:56:15.851267   59655 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:56:15.851302   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 20:56:15.397039   59107 pod_ready.go:92] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:15.397070   59107 pod_ready.go:81] duration metric: took 6.008141421s for pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:15.397082   59107 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.405606   59107 pod_ready.go:92] pod "etcd-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:17.405638   59107 pod_ready.go:81] duration metric: took 2.008547358s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.405653   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.411786   59107 pod_ready.go:92] pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:17.411810   59107 pod_ready.go:81] duration metric: took 6.14625ms for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.411822   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.421681   59107 pod_ready.go:92] pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.421712   59107 pod_ready.go:81] duration metric: took 2.009879259s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.421725   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5h5xl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.428235   59107 pod_ready.go:92] pod "kube-proxy-5h5xl" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.428260   59107 pod_ready.go:81] duration metric: took 6.527896ms for pod "kube-proxy-5h5xl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.428269   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.433130   59107 pod_ready.go:92] pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.433154   59107 pod_ready.go:81] duration metric: took 4.87807ms for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.433163   59107 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" ...
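The pod_ready.go polling above is minikube's internal readiness loop. An equivalent manual check with kubectl would look roughly like the sketch below, assuming the kubeconfig context carries the profile name embed-certs-239931 as it does elsewhere in this report; pod names are the ones reported Ready above.

  # Sketch: wait up to 4 minutes for the same control-plane pods to report Ready.
  kubectl --context embed-certs-239931 -n kube-system wait --for=condition=Ready \
    pod/etcd-embed-certs-239931 pod/kube-apiserver-embed-certs-239931 \
    pod/kube-controller-manager-embed-certs-239931 pod/kube-scheduler-embed-certs-239931 \
    --timeout=4m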
	I0708 20:56:16.615670   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:16.616225   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:16.616257   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:16.616177   60369 retry.go:31] will retry after 489.658115ms: waiting for machine to come up
	I0708 20:56:17.107848   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:17.108391   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:17.108420   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:17.108341   60369 retry.go:31] will retry after 620.239043ms: waiting for machine to come up
	I0708 20:56:17.730239   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:17.730822   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:17.730854   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:17.730758   60369 retry.go:31] will retry after 818.379867ms: waiting for machine to come up
	I0708 20:56:18.550539   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:18.551024   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:18.551049   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:18.550993   60369 retry.go:31] will retry after 1.138596155s: waiting for machine to come up
	I0708 20:56:19.691669   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:19.692214   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:19.692267   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:19.692149   60369 retry.go:31] will retry after 1.467771065s: waiting for machine to come up
	I0708 20:56:21.161367   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:21.161916   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:21.161945   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:21.161854   60369 retry.go:31] will retry after 1.592022559s: waiting for machine to come up
	I0708 20:56:17.447251   59655 crio.go:462] duration metric: took 1.600850063s to copy over tarball
	I0708 20:56:17.447341   59655 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:56:19.773249   59655 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.325874804s)
	I0708 20:56:19.773277   59655 crio.go:469] duration metric: took 2.325993304s to extract the tarball
	I0708 20:56:19.773286   59655 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:56:19.811911   59655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:19.859029   59655 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:56:19.859060   59655 cache_images.go:84] Images are preloaded, skipping loading
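The steps from 20:56:15 to 20:56:19 above copy the ~395 MB preload tarball into the guest and unpack it into /var before re-checking the image store. A condensed sketch of that sequence, with paths and flags taken directly from the logged commands:

  # Sketch: unpack a minikube preload tarball into CRI-O's storage, then verify.
  sudo crictl images --output json   # initially missing registry.k8s.io/kube-apiserver:v1.30.2
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm -f /preloaded.tar.lz4
  sudo crictl images --output json   # now reports the preloaded images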
	I0708 20:56:19.859070   59655 kubeadm.go:928] updating node { 192.168.72.163 8444 v1.30.2 crio true true} ...
	I0708 20:56:19.859208   59655 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-071971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:19.859281   59655 ssh_runner.go:195] Run: crio config
	I0708 20:56:19.905778   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:56:19.905806   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:19.905822   59655 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:19.905847   59655 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.163 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-071971 NodeName:default-k8s-diff-port-071971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:19.906035   59655 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.163
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-071971"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:19.906113   59655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:19.916307   59655 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:19.916388   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:19.926496   59655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0708 20:56:19.947778   59655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:19.969466   59655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0708 20:56:19.991103   59655 ssh_runner.go:195] Run: grep 192.168.72.163	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:19.995180   59655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:20.008005   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:20.143869   59655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:20.162694   59655 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971 for IP: 192.168.72.163
	I0708 20:56:20.162713   59655 certs.go:194] generating shared ca certs ...
	I0708 20:56:20.162745   59655 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:20.162930   59655 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:20.162986   59655 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:20.162997   59655 certs.go:256] generating profile certs ...
	I0708 20:56:20.163097   59655 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.key
	I0708 20:56:20.163220   59655 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.key.17bd30e8
	I0708 20:56:20.163259   59655 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.key
	I0708 20:56:20.163394   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:20.163478   59655 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:20.163493   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:20.163524   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:20.163558   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:20.163594   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:20.163659   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:20.164318   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:20.198987   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:20.251872   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:20.281444   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:20.305751   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0708 20:56:20.332608   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 20:56:20.365206   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:20.399631   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:56:20.430016   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:20.462126   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:20.492669   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:20.521867   59655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:20.540725   59655 ssh_runner.go:195] Run: openssl version
	I0708 20:56:20.546789   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:20.558515   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.563342   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.563430   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.570039   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:20.585367   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:20.601217   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.605930   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.605993   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.612015   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:56:20.623796   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:20.635305   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.640571   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.640649   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.648600   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
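The openssl/ln sequence above is effectively a manual c_rehash: each CA file installed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A small sketch of that pattern for one certificate, using the minikubeCA path from the log; this is an illustration of the pattern, not the exact command minikube runs.

  # Sketch: install a CA cert under its subject-hash name, as the provisioning step does.
  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")     # e.g. b5213941
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"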
	I0708 20:56:20.663899   59655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:20.669383   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:20.675967   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:20.682513   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:20.690280   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:20.698720   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:20.705678   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
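
The stat and `openssl x509 -checkend 86400` calls above confirm each control-plane certificate will still be valid 24 hours from now; an expiring certificate would trigger regeneration. A short sketch of the same check with Go's crypto/x509 (cert path taken from the log, the 24h window from the 86400-second argument):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path stops being valid
// before now+window, i.e. what a non-zero exit from -checkend means.
func expiresWithin(path string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it should be regenerated")
	}
}
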
	I0708 20:56:20.712524   59655 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:20.712643   59655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:20.712700   59655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:20.761032   59655 cri.go:89] found id: ""
	I0708 20:56:20.761107   59655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:20.772712   59655 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:20.772736   59655 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:20.772742   59655 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:20.772793   59655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:20.784860   59655 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:20.785974   59655 kubeconfig.go:125] found "default-k8s-diff-port-071971" server: "https://192.168.72.163:8444"
	I0708 20:56:20.788290   59655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:20.799889   59655 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.163
	I0708 20:56:20.799919   59655 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:20.799947   59655 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:20.800011   59655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:20.846864   59655 cri.go:89] found id: ""
	I0708 20:56:20.846936   59655 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:20.865883   59655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:20.877476   59655 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:20.877495   59655 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:20.877548   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0708 20:56:20.889786   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:20.889853   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:20.902185   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0708 20:56:20.913510   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:20.913573   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:20.923964   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0708 20:56:20.934048   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:20.934131   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:20.945078   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0708 20:56:20.955290   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:20.955354   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
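
The grep/rm pairs above prune any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint, so the kubeconfig phase below can regenerate them cleanly. A sketch of that check (endpoint and file names copied from the log; run as root in practice):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneIfStale removes the file unless it mentions the expected endpoint.
// A missing file is fine: kubeadm will write a fresh one.
func pruneIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil
	}
	if err != nil {
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // still points at the right API server
	}
	fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := pruneIfStale("/etc/kubernetes/"+name, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
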
	I0708 20:56:20.966182   59655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:20.977508   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:21.319213   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:21.511204   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:23.942367   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:22.755738   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:22.756182   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:22.756243   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:22.756167   60369 retry.go:31] will retry after 1.858003233s: waiting for machine to come up
	I0708 20:56:24.616152   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:24.616674   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:24.616703   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:24.616618   60369 retry.go:31] will retry after 2.203640369s: waiting for machine to come up
	I0708 20:56:22.471504   59655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.152252924s)
	I0708 20:56:22.471539   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.692407   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.756884   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
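
On restart the control plane is rebuilt from individual `kubeadm init phase` subcommands rather than a full `kubeadm init`. A sketch of the same sequence as a loop (phase order, pinned binary PATH and config path are taken from the log; sudo access on the guest is assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Mirrors the logged shape: sudo env PATH=<pinned binaries> kubeadm init phase <phase> --config <yaml>
		shell := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		cmd := exec.Command("/bin/bash", "-c", shell)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
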
	I0708 20:56:22.892773   59655 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:22.892888   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.393789   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.893298   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.941073   59655 api_server.go:72] duration metric: took 1.048301169s to wait for apiserver process to appear ...
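
The "waiting for apiserver process" step above is a pgrep poll for a kube-apiserver whose command line mentions minikube. A sketch of that loop (the pattern matches the logged command; the 2-minute ceiling and 500ms interval are assumptions, and sudo is omitted):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Equivalent of: pgrep -xnf kube-apiserver.*minikube.*
		if out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output(); err == nil {
			fmt.Printf("kube-apiserver running as pid %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("kube-apiserver process did not appear in time")
}
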
	I0708 20:56:23.941100   59655 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:23.941131   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.221991   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:56:27.222029   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:56:27.222048   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:26.441670   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:28.939138   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:27.353017   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.353069   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:27.442130   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.447304   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.447326   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:27.941979   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.951850   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.951878   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:28.441380   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:28.452031   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:28.452069   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:28.941613   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:28.946045   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:28.946084   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:29.441485   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:29.448847   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:29.448877   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:29.941906   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:29.946380   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:29.946416   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:30.442013   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:30.447291   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 200:
	ok
	I0708 20:56:30.454664   59655 api_server.go:141] control plane version: v1.30.2
	I0708 20:56:30.454693   59655 api_server.go:131] duration metric: took 6.513586414s to wait for apiserver health ...
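
The healthz wait above tolerates a 403 (anonymous access to /healthz is forbidden before RBAC bootstrap finishes) and then 500s while post-start hooks such as rbac/bootstrap-roles register, until the endpoint finally answers 200 "ok". A minimal polling sketch against the endpoint from the log (the InsecureSkipVerify shortcut, interval and ceiling are assumptions, not minikube's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the timeout elapses,
// printing non-200 bodies the way the log above records them.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.163:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
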
	I0708 20:56:30.454701   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:56:30.454707   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:30.456577   59655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:56:26.821665   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:26.822266   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:26.822297   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:26.822209   60369 retry.go:31] will retry after 3.478824168s: waiting for machine to come up
	I0708 20:56:30.302329   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:30.302766   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:30.302796   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:30.302707   60369 retry.go:31] will retry after 3.597512692s: waiting for machine to come up
	I0708 20:56:30.458168   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:56:30.469918   59655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
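
The bridge CNI step writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist; the file's contents are not reproduced in the log. The sketch below writes a representative bridge+portmap conflist of the kind the bridge plugin expects (cniVersion, subnet and plugin options are assumptions, not the exact file minikube generates):

package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
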
	I0708 20:56:30.492348   59655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:56:30.503174   59655 system_pods.go:59] 8 kube-system pods found
	I0708 20:56:30.503210   59655 system_pods.go:61] "coredns-7db6d8ff4d-c4tzw" [e5ea7dde-1134-45d0-b3e2-176e6a8f068e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:56:30.503218   59655 system_pods.go:61] "etcd-default-k8s-diff-port-071971" [693fd668-83c2-43e6-bf43-7b1a9e654db0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:56:30.503226   59655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071971" [eadde33a-b967-4a58-9730-d163e6b8c0c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:56:30.503233   59655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071971" [99bd8e55-e0a9-4071-a0f0-dc9d1e79b58d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:56:30.503238   59655 system_pods.go:61] "kube-proxy-vq4l8" [e2a4779c-e8ed-4f5b-872b-d10604936178] Running
	I0708 20:56:30.503244   59655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071971" [af6b0a79-be1e-4caa-86a6-47ac782ac438] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:56:30.503250   59655 system_pods.go:61] "metrics-server-569cc877fc-h2dzd" [7075aa8e-0716-4965-8a13-3ed804190b3e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:56:30.503257   59655 system_pods.go:61] "storage-provisioner" [9fca5ac9-cd65-4257-b012-20ded80a39a5] Running
	I0708 20:56:30.503265   59655 system_pods.go:74] duration metric: took 10.887672ms to wait for pod list to return data ...
	I0708 20:56:30.503279   59655 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:56:30.509137   59655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:56:30.509170   59655 node_conditions.go:123] node cpu capacity is 2
	I0708 20:56:30.509189   59655 node_conditions.go:105] duration metric: took 5.903588ms to run NodePressure ...
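
The system_pods and NodePressure checks above read the kube-system pod list and the node's reported capacity through the API server that just came back up. A client-go sketch of the same reads (kubeconfig path is assumed; requires the k8s.io/client-go and k8s.io/api modules):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods(metav1.NamespaceSystem).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
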
	I0708 20:56:30.509210   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:30.780430   59655 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:56:30.788138   59655 kubeadm.go:733] kubelet initialised
	I0708 20:56:30.788168   59655 kubeadm.go:734] duration metric: took 7.711132ms waiting for restarted kubelet to initialise ...
	I0708 20:56:30.788177   59655 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:56:30.797001   59655 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:30.939824   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:32.940860   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:34.941652   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
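
The recurring pod_ready lines report on the PodReady condition of each system-critical pod; a pod only counts once that condition is True. A tiny sketch of the condition check (the sample pod is fabricated to mirror the "Ready":"False" status logged above):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady returns true when the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}},
	}}
	fmt.Println(isPodReady(pod)) // false, matching the waits above
}
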
	I0708 20:56:33.901849   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.902332   58678 main.go:141] libmachine: (no-preload-028021) Found IP for machine: 192.168.39.108
	I0708 20:56:33.902356   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has current primary IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.902361   58678 main.go:141] libmachine: (no-preload-028021) Reserving static IP address...
	I0708 20:56:33.902766   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "no-preload-028021", mac: "52:54:00:c5:5d:f8", ip: "192.168.39.108"} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:33.902797   58678 main.go:141] libmachine: (no-preload-028021) DBG | skip adding static IP to network mk-no-preload-028021 - found existing host DHCP lease matching {name: "no-preload-028021", mac: "52:54:00:c5:5d:f8", ip: "192.168.39.108"}
	I0708 20:56:33.902808   58678 main.go:141] libmachine: (no-preload-028021) Reserved static IP address: 192.168.39.108
	I0708 20:56:33.902825   58678 main.go:141] libmachine: (no-preload-028021) Waiting for SSH to be available...
	I0708 20:56:33.902835   58678 main.go:141] libmachine: (no-preload-028021) DBG | Getting to WaitForSSH function...
	I0708 20:56:33.905031   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.905318   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:33.905341   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.905479   58678 main.go:141] libmachine: (no-preload-028021) DBG | Using SSH client type: external
	I0708 20:56:33.905509   58678 main.go:141] libmachine: (no-preload-028021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa (-rw-------)
	I0708 20:56:33.905535   58678 main.go:141] libmachine: (no-preload-028021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:56:33.905560   58678 main.go:141] libmachine: (no-preload-028021) DBG | About to run SSH command:
	I0708 20:56:33.905573   58678 main.go:141] libmachine: (no-preload-028021) DBG | exit 0
	I0708 20:56:34.035510   58678 main.go:141] libmachine: (no-preload-028021) DBG | SSH cmd err, output: <nil>: 
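
WaitForSSH above shells out to the external ssh client with the machine's id_rsa and keeps running `exit 0` until the guest's sshd answers. A sketch of the same probe with golang.org/x/crypto/ssh (address, user and key path come from the log; the retry interval, overall timeout and the relaxed host-key check are assumptions):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				runErr := session.Run("exit 0")
				session.Close()
				client.Close()
				if runErr == nil {
					return nil // sshd is up and accepting commands
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("SSH on %s not available within %s", addr, timeout)
}

func main() {
	err := waitForSSH("192.168.39.108:22", "docker",
		"/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa",
		4*time.Minute)
	fmt.Println(err)
}
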
	I0708 20:56:34.035876   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetConfigRaw
	I0708 20:56:34.036501   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:34.039070   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.039467   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.039496   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.039711   58678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/config.json ...
	I0708 20:56:34.039936   58678 machine.go:94] provisionDockerMachine start ...
	I0708 20:56:34.039955   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:34.040191   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.042269   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.042640   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.042666   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.042793   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.042954   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.043125   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.043292   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.043496   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.043662   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.043671   58678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:56:34.156092   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:56:34.156143   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.156412   58678 buildroot.go:166] provisioning hostname "no-preload-028021"
	I0708 20:56:34.156441   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.156639   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.159015   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.159420   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.159467   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.159606   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.159817   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.160015   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.160214   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.160407   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.160572   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.160584   58678 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-028021 && echo "no-preload-028021" | sudo tee /etc/hostname
	I0708 20:56:34.286222   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-028021
	
	I0708 20:56:34.286250   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.289067   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.289376   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.289399   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.289617   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.289832   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.289991   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.290129   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.290310   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.290471   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.290485   58678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-028021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-028021/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-028021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:56:34.414724   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:56:34.414749   58678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:56:34.414790   58678 buildroot.go:174] setting up certificates
	I0708 20:56:34.414799   58678 provision.go:84] configureAuth start
	I0708 20:56:34.414808   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.415115   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:34.417919   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.418241   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.418273   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.418491   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.421129   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.421603   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.421634   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.421756   58678 provision.go:143] copyHostCerts
	I0708 20:56:34.421818   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:56:34.421839   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:56:34.421906   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:56:34.422023   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:56:34.422034   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:56:34.422064   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:56:34.422151   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:56:34.422161   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:56:34.422196   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:56:34.422276   58678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.no-preload-028021 san=[127.0.0.1 192.168.39.108 localhost minikube no-preload-028021]
	I0708 20:56:34.634189   58678 provision.go:177] copyRemoteCerts
	I0708 20:56:34.634253   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:56:34.634281   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.637123   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.637364   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.637396   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.637609   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.637912   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.638172   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.638410   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:34.726761   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:56:34.751947   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0708 20:56:34.776165   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:56:34.803849   58678 provision.go:87] duration metric: took 389.036659ms to configureAuth
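Note: the configureAuth phase above generated a fresh server certificate with the san=[...] list logged earlier and pushed ca.pem, server.pem and server-key.pem into /etc/docker on the guest. An illustrative way to confirm what the guest actually received (profile name taken from this run; not part of the test itself):

    minikube -p no-preload-028021 ssh -- sudo openssl x509 -noout -text -in /etc/docker/server.pem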
	I0708 20:56:34.803880   58678 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:56:34.804125   58678 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:56:34.804202   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.808559   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.808925   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.808966   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.809164   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.809416   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.809572   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.809710   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.809874   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.810069   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.810097   58678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:56:35.096796   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:56:35.096822   58678 machine.go:97] duration metric: took 1.056870853s to provisionDockerMachine
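A note on the %!s(MISSING) tokens in the SSH command above: they are an artifact of the Go logger re-printing a command string that itself contains printf verbs, not of the command that ran. The echoed output confirms the intent; the command executed on the guest was, in effect:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

The same artifact reappears below in the clock probe logged as date +%!s(MISSING).%!N(MISSING), which is really date +%s.%N (seconds.nanoseconds), as its 1720472195.300246165 output shows.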
	I0708 20:56:35.096834   58678 start.go:293] postStartSetup for "no-preload-028021" (driver="kvm2")
	I0708 20:56:35.096847   58678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:56:35.096864   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.097227   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:56:35.097266   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.100040   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.100428   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.100449   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.100637   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.100826   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.100967   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.101128   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.187796   58678 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:56:35.192221   58678 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:56:35.192248   58678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:56:35.192315   58678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:56:35.192383   58678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:56:35.192467   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:56:35.204227   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:35.230404   58678 start.go:296] duration metric: took 133.555408ms for postStartSetup
	I0708 20:56:35.230446   58678 fix.go:56] duration metric: took 21.04954132s for fixHost
	I0708 20:56:35.230464   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.233341   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.233654   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.233685   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.233878   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.234070   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.234248   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.234413   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.234611   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:35.234834   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:35.234849   58678 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:56:35.348439   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472195.300246165
	
	I0708 20:56:35.348459   58678 fix.go:216] guest clock: 1720472195.300246165
	I0708 20:56:35.348468   58678 fix.go:229] Guest: 2024-07-08 20:56:35.300246165 +0000 UTC Remote: 2024-07-08 20:56:35.230449891 +0000 UTC m=+338.995803708 (delta=69.796274ms)
	I0708 20:56:35.348487   58678 fix.go:200] guest clock delta is within tolerance: 69.796274ms
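Spelled out, the clock comparison is: guest 20:56:35.300246165 minus the host-observed remote time 20:56:35.230449891 ≈ 0.069796274 s, i.e. the 69.796274ms delta reported, which the fix step treats as within tolerance, so the guest clock is left untouched.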
	I0708 20:56:35.348492   58678 start.go:83] releasing machines lock for "no-preload-028021", held for 21.167624903s
	I0708 20:56:35.348509   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.348752   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:35.351300   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.351779   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.351806   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.351977   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352557   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352725   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352799   58678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:56:35.352839   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.352942   58678 ssh_runner.go:195] Run: cat /version.json
	I0708 20:56:35.352969   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.355646   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356037   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.356071   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356117   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356267   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.356470   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.356555   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.356580   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356642   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.356706   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.356770   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.356885   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.357020   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.357154   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.438344   58678 ssh_runner.go:195] Run: systemctl --version
	I0708 20:56:35.470518   58678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:56:35.628022   58678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:56:35.636390   58678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:56:35.636468   58678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:56:35.654729   58678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:56:35.654753   58678 start.go:494] detecting cgroup driver to use...
	I0708 20:56:35.654824   58678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:56:35.678564   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:56:35.697122   58678 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:56:35.697202   58678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:56:35.713388   58678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:56:35.728254   58678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:56:35.874433   58678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:56:36.062521   58678 docker.go:233] disabling docker service ...
	I0708 20:56:36.062614   58678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:56:36.081225   58678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:56:36.096855   58678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:56:36.229455   58678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:56:36.375525   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:56:36.390772   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:56:36.411762   58678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:56:36.411905   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.423149   58678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:56:36.423218   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.434145   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.447568   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.458758   58678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:56:36.469393   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.479663   58678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.501298   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.512407   58678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:56:36.522400   58678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:56:36.522469   58678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:56:36.536310   58678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:56:36.547955   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:36.680408   58678 ssh_runner.go:195] Run: sudo systemctl restart crio
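The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and opens unprivileged ports via default_sysctls, then reloads systemd and restarts crio. A consolidated sketch of the drop-in it is driving toward (section headers assumed from CRI-O's documented TOML layout; only the keys touched above are shown):

    # /etc/crio/crio.conf.d/02-crio.conf (sketch of the values set above)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]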
	I0708 20:56:36.860344   58678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:56:36.860416   58678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:56:36.866153   58678 start.go:562] Will wait 60s for crictl version
	I0708 20:56:36.866221   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:36.871623   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:56:36.917564   58678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:56:36.917655   58678 ssh_runner.go:195] Run: crio --version
	I0708 20:56:36.954595   58678 ssh_runner.go:195] Run: crio --version
	I0708 20:56:36.985788   58678 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:56:32.805051   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:35.303979   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:36.303556   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.303581   59655 pod_ready.go:81] duration metric: took 5.506548207s for pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.303590   59655 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.308571   59655 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.308596   59655 pod_ready.go:81] duration metric: took 4.998994ms for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.308610   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.314379   59655 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.314402   59655 pod_ready.go:81] duration metric: took 5.784289ms for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.314411   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.942775   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:39.440483   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:36.987568   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:36.990699   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:36.991105   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:36.991146   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:36.991378   58678 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 20:56:36.996102   58678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:37.012228   58678 kubeadm.go:877] updating cluster {Name:no-preload-028021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:56:37.012390   58678 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:56:37.012439   58678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:37.050690   58678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:56:37.050715   58678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 20:56:37.050765   58678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.050988   58678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.051005   58678 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.051146   58678 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.051199   58678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.051323   58678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.051396   58678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.051560   58678 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0708 20:56:37.052741   58678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.052826   58678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.052840   58678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.052853   58678 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0708 20:56:37.052910   58678 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.052742   58678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.052741   58678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.052744   58678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.237714   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.238720   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.246932   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0708 20:56:37.253938   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.256152   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.264291   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.304685   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.316620   58678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.2" does not exist at hash "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940" in container runtime
	I0708 20:56:37.316664   58678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.316710   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.352464   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.387003   58678 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0708 20:56:37.387039   58678 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.387078   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.513840   58678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.2" does not exist at hash "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974" in container runtime
	I0708 20:56:37.513886   58678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.513925   58678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.2" does not exist at hash "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe" in container runtime
	I0708 20:56:37.513938   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.513958   58678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.513987   58678 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0708 20:56:37.514000   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514016   58678 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.514054   58678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.2" does not exist at hash "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772" in container runtime
	I0708 20:56:37.514097   58678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.514061   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514136   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514138   58678 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0708 20:56:37.514078   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.514159   58678 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.514191   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514224   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.535635   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.535678   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.535744   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.535744   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.596995   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0708 20:56:37.597092   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.597102   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.651190   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0708 20:56:37.651320   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:37.695843   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0708 20:56:37.695944   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0708 20:56:37.695995   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.2 (exists)
	I0708 20:56:37.696018   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:37.696020   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.696052   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:37.695849   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0708 20:56:37.696071   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.695875   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0708 20:56:37.696117   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:37.696211   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:37.721410   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0708 20:56:37.721453   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.2 (exists)
	I0708 20:56:37.721536   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0708 20:56:37.721644   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:39.890974   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.19489331s)
	I0708 20:56:39.891017   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.2 (exists)
	I0708 20:56:39.891070   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2: (2.194976871s)
	I0708 20:56:39.891096   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 from cache
	I0708 20:56:39.891107   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.194875907s)
	I0708 20:56:39.891117   58678 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:39.891120   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0708 20:56:39.891156   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.2: (2.194966409s)
	I0708 20:56:39.891175   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:39.891184   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.2 (exists)
	I0708 20:56:39.891196   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.169535432s)
	I0708 20:56:39.891212   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0708 20:56:37.824606   59655 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:37.824634   59655 pod_ready.go:81] duration metric: took 1.510214968s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.824646   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vq4l8" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.829963   59655 pod_ready.go:92] pod "kube-proxy-vq4l8" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:37.829988   59655 pod_ready.go:81] duration metric: took 5.334688ms for pod "kube-proxy-vq4l8" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.829997   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:38.338575   59655 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:38.338611   59655 pod_ready.go:81] duration metric: took 508.60515ms for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:38.338625   59655 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:40.346498   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:41.939773   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:43.941838   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:41.962256   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.071056184s)
	I0708 20:56:41.962281   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0708 20:56:41.962304   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:41.962349   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:44.325730   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2: (2.363358371s)
	I0708 20:56:44.325760   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 from cache
	I0708 20:56:44.325789   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:44.325853   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:42.845177   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:44.846215   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:46.441086   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:48.939348   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:46.588882   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.263001s)
	I0708 20:56:46.588909   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 from cache
	I0708 20:56:46.588931   58678 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:46.588980   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:50.590689   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.001689035s)
	I0708 20:56:50.590724   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0708 20:56:50.590758   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:50.590813   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:47.345179   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:49.346736   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:51.846003   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:50.940095   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:53.441346   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:52.446198   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2: (1.855362154s)
	I0708 20:56:52.446229   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 from cache
	I0708 20:56:52.446247   58678 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:52.446284   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:53.400379   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0708 20:56:53.400419   58678 cache_images.go:123] Successfully loaded all cached images
	I0708 20:56:53.400424   58678 cache_images.go:92] duration metric: took 16.349697925s to LoadCachedImages
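This is the no-preload path at work: since no preloaded image tarball matched Kubernetes v1.30.2 on crio, each required image was checked with podman image inspect, removed via crictl rmi where stale, copied from the host cache under .minikube/cache/images, and loaded with podman load. An illustrative way to see what the node ends up with afterwards (profile name taken from this run):

    minikube -p no-preload-028021 ssh -- sudo crictl images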
	I0708 20:56:53.400436   58678 kubeadm.go:928] updating node { 192.168.39.108 8443 v1.30.2 crio true true} ...
	I0708 20:56:53.400599   58678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-028021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
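This rendered drop-in is what gets copied a little further down to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 317-byte scp), alongside the base /lib/systemd/system/kubelet.service unit. To see the unit exactly as systemd on the guest resolves it, an illustrative check with the profile name from this run is:

    minikube -p no-preload-028021 ssh -- sudo systemctl cat kubelet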
	I0708 20:56:53.400692   58678 ssh_runner.go:195] Run: crio config
	I0708 20:56:53.452091   58678 cni.go:84] Creating CNI manager for ""
	I0708 20:56:53.452117   58678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:53.452131   58678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:53.452150   58678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.108 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-028021 NodeName:no-preload-028021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:53.452285   58678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-028021"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:53.452344   58678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:53.464447   58678 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:53.464522   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:53.474930   58678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0708 20:56:53.493701   58678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:53.511491   58678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
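The kubeadm.yaml.new just copied is the rendered config shown above (the "0%!"(MISSING) values under evictionHard are the same logger printf artifact noted earlier; the real values are "0%"). Once the API server is up, its ClusterConfiguration half can be compared against what kubeadm actually stored, for example (kubectl context name taken from this run):

    kubectl --context no-preload-028021 -n kube-system get configmap kubeadm-config -o yaml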
	I0708 20:56:53.530848   58678 ssh_runner.go:195] Run: grep 192.168.39.108	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:53.534931   58678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:53.547590   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:53.658960   58678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:53.677127   58678 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021 for IP: 192.168.39.108
	I0708 20:56:53.677151   58678 certs.go:194] generating shared ca certs ...
	I0708 20:56:53.677169   58678 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:53.677296   58678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:53.677330   58678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:53.677338   58678 certs.go:256] generating profile certs ...
	I0708 20:56:53.677420   58678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.key
	I0708 20:56:53.677471   58678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.key.c3084b2b
	I0708 20:56:53.677511   58678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.key
	I0708 20:56:53.677613   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:53.677639   58678 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:53.677645   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:53.677677   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:53.677752   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:53.677785   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:53.677825   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:53.680483   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:53.739386   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:53.770850   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:53.813958   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:53.850256   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0708 20:56:53.891539   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:56:53.921136   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:53.948966   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:56:53.977129   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:54.002324   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:54.028222   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:54.054099   58678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:54.073386   58678 ssh_runner.go:195] Run: openssl version
	I0708 20:56:54.079883   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:54.092980   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.097451   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.097503   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.103507   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:54.115123   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:54.126757   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.131534   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.131579   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.137333   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:54.148368   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:54.159628   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.164230   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.164280   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.170068   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
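The test/hash/ln triplets above implement OpenSSL's subject-hash naming for /etc/ssl/certs, so tools that resolve CAs by directory lookup can find the minikube and user certificates. One iteration, generalized into plain shell:

    # Link a CA bundle into /etc/ssl/certs under both its filename and its
    # OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem above).
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/$(basename "$pem")"
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"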
	I0708 20:56:54.182152   58678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:54.187146   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:54.193425   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:54.200491   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:54.207006   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:54.213285   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:54.220313   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
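Each -checkend 86400 call asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would force certificate regeneration before the cluster restart. Spelled out for one certificate:

    # Exit 0: still valid in 24h; non-zero: expired or about to expire.
    if sudo openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "certificate valid for at least another 24h"
    else
      echo "certificate expires within 24h"
    fi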
	I0708 20:56:54.227497   58678 kubeadm.go:391] StartCluster: {Name:no-preload-028021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:54.227597   58678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:54.227657   58678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:54.273025   58678 cri.go:89] found id: ""
	I0708 20:56:54.273094   58678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:54.284942   58678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:54.284965   58678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:54.284972   58678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:54.285023   58678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:54.296666   58678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:54.297740   58678 kubeconfig.go:125] found "no-preload-028021" server: "https://192.168.39.108:8443"
	I0708 20:56:54.299928   58678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:54.310186   58678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.108
	I0708 20:56:54.310224   58678 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:54.310235   58678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:54.310290   58678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:54.351640   58678 cri.go:89] found id: ""
	I0708 20:56:54.351709   58678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
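The crictl queries list container IDs for kube-system pods only (the --label filter matches the namespace label set on every container); the empty result here means there is nothing to stop, so only the kubelet service itself is shut down. When containers do exist, the same step amounts to:

    # List all kube-system containers (running or exited) and stop any found;
    # the xargs pipeline is an illustration of the step, not minikube's own code.
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -n "$ids" ] && echo "$ids" | xargs -r sudo crictl stop
    sudo systemctl stop kubelet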
	I0708 20:56:54.370292   58678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:54.380551   58678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:54.380571   58678 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:54.380611   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:56:54.391462   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:54.391525   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:54.401804   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:56:54.411423   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:54.411501   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:54.422126   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:56:54.432236   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:54.432299   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:54.443001   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:56:54.454210   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:54.454271   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
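The grep/rm pairs above inspect each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete any file that fails the check; in this run every file is simply absent, so each grep exits with status 2 and the rm is a no-op. Condensed into a loop, the cleanup is:

    # Drop any kubeconfig that does not point at the expected endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done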
	I0708 20:56:54.465426   58678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:54.477714   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:54.593844   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.651092   58678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.057214047s)
	I0708 20:56:55.651120   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.862342   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.952093   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
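Rather than running a full kubeadm init, the restart path replays individual phases against the staged /var/tmp/minikube/kubeadm.yaml using the pinned v1.30.2 binaries. In order, the commands issued above are:

    # Regenerate certs and kubeconfigs, restart the kubelet, and recreate the
    # control-plane static pod manifests plus the local etcd member.
    sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml

The addon phase (kubeadm init phase addon all) is deferred until the API server reports healthy, further down in the log.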
	I0708 20:56:56.070164   58678 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:56.070232   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:53.846869   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:55.847242   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:55.941645   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:58.440406   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:56.570644   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:57.071067   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:57.099879   58678 api_server.go:72] duration metric: took 1.02971362s to wait for apiserver process to appear ...
	I0708 20:56:57.099907   58678 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:57.099932   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.102677   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:57:00.102805   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:57:00.102854   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.143035   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:57:00.143069   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:57:00.600694   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.605315   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:00.605349   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:01.100628   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:01.106209   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:01.106235   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:58.345619   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:00.346515   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:01.600656   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:01.605348   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:01.605381   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:02.101023   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:02.105457   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:02.105490   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:02.600058   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:02.604370   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:02.604397   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:03.100641   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:03.105655   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:03.105685   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:03.600193   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:03.604714   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I0708 20:57:03.617761   58678 api_server.go:141] control plane version: v1.30.2
	I0708 20:57:03.617795   58678 api_server.go:131] duration metric: took 6.517881236s to wait for apiserver health ...
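The healthz polling above is anonymous, which explains the progression of responses: 403 Forbidden while the RBAC bootstrap roles that allow unauthenticated /healthz access are still being created, then 500 while individual poststarthooks (rbac/bootstrap-roles, apiservice-discovery-controller, and so on) finish, and finally a plain 200 "ok". The equivalent manual probe, with the same per-check breakdown, is:

    # -k because the probe presents no client certificate; ?verbose prints the
    # per-check [+]/[-] lines seen in the log.
    curl -k "https://192.168.39.108:8443/healthz?verbose"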
	I0708 20:57:03.617805   58678 cni.go:84] Creating CNI manager for ""
	I0708 20:57:03.617811   58678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:57:03.619739   58678 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:57:00.940450   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:03.448484   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:03.621363   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:57:03.635846   58678 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
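The 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. A bridge CNI configuration of the kind this step installs typically looks like the sketch below; the exact field values minikube writes may differ:

    # Illustrative bridge CNI config (values assumed, not the exact bytes written above).
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF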
	I0708 20:57:03.667045   58678 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:57:03.686236   58678 system_pods.go:59] 8 kube-system pods found
	I0708 20:57:03.686308   58678 system_pods.go:61] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:57:03.686322   58678 system_pods.go:61] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:57:03.686334   58678 system_pods.go:61] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:57:03.686348   58678 system_pods.go:61] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:57:03.686354   58678 system_pods.go:61] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 20:57:03.686363   58678 system_pods.go:61] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:57:03.686371   58678 system_pods.go:61] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:57:03.686379   58678 system_pods.go:61] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 20:57:03.686390   58678 system_pods.go:74] duration metric: took 19.320099ms to wait for pod list to return data ...
	I0708 20:57:03.686402   58678 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:57:03.696401   58678 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:57:03.696436   58678 node_conditions.go:123] node cpu capacity is 2
	I0708 20:57:03.696449   58678 node_conditions.go:105] duration metric: took 10.038061ms to run NodePressure ...
	I0708 20:57:03.696474   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:57:03.981698   58678 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:57:03.987357   58678 kubeadm.go:733] kubelet initialised
	I0708 20:57:03.987379   58678 kubeadm.go:734] duration metric: took 5.653044ms waiting for restarted kubelet to initialise ...
	I0708 20:57:03.987387   58678 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:03.993341   58678 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:03.999133   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:03.999165   58678 pod_ready.go:81] duration metric: took 5.798521ms for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:03.999177   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:03.999188   58678 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.004640   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "etcd-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.004666   58678 pod_ready.go:81] duration metric: took 5.471219ms for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.004676   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "etcd-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.004685   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.011313   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-apiserver-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.011342   58678 pod_ready.go:81] duration metric: took 6.65044ms for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.011354   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-apiserver-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.011364   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.071038   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.071092   58678 pod_ready.go:81] duration metric: took 59.716762ms for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.071105   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.071114   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.470702   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-proxy-6p6l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.470732   58678 pod_ready.go:81] duration metric: took 399.6044ms for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.470743   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-proxy-6p6l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.470753   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.871002   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-scheduler-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.871036   58678 pod_ready.go:81] duration metric: took 400.275337ms for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.871045   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-scheduler-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.871052   58678 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:05.270858   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:05.270883   58678 pod_ready.go:81] duration metric: took 399.822389ms for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:05.270892   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:05.270899   58678 pod_ready.go:38] duration metric: took 1.283504929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
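Every per-pod wait above short-circuits with pod_ready.go:97 because the node itself still reports Ready=False, so the extra wait completes in about 1.3 s; the real readiness gating happens later against the node object. The condition the waiter consults can be inspected directly, for example:

    # Node Ready condition and kube-system pod states for this profile's context.
    kubectl --context no-preload-028021 get node no-preload-028021 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    kubectl --context no-preload-028021 -n kube-system get pods -o wide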
	I0708 20:57:05.270914   58678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 20:57:05.284879   58678 ops.go:34] apiserver oom_adj: -16
	I0708 20:57:05.284900   58678 kubeadm.go:591] duration metric: took 10.999921787s to restartPrimaryControlPlane
	I0708 20:57:05.284912   58678 kubeadm.go:393] duration metric: took 11.057424996s to StartCluster
	I0708 20:57:05.284931   58678 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:57:05.285024   58678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:57:05.287297   58678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:57:05.287607   58678 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 20:57:05.287708   58678 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 20:57:05.287790   58678 addons.go:69] Setting storage-provisioner=true in profile "no-preload-028021"
	I0708 20:57:05.287807   58678 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:57:05.287809   58678 addons.go:69] Setting default-storageclass=true in profile "no-preload-028021"
	I0708 20:57:05.287845   58678 addons.go:69] Setting metrics-server=true in profile "no-preload-028021"
	I0708 20:57:05.287900   58678 addons.go:234] Setting addon metrics-server=true in "no-preload-028021"
	W0708 20:57:05.287912   58678 addons.go:243] addon metrics-server should already be in state true
	I0708 20:57:05.287946   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.287854   58678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-028021"
	I0708 20:57:05.287825   58678 addons.go:234] Setting addon storage-provisioner=true in "no-preload-028021"
	W0708 20:57:05.288007   58678 addons.go:243] addon storage-provisioner should already be in state true
	I0708 20:57:05.288040   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.288276   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288308   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.288380   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288382   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288430   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.288413   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.289690   58678 out.go:177] * Verifying Kubernetes components...
	I0708 20:57:05.291336   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:57:05.310203   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0708 20:57:05.310610   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.311107   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.311129   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.311527   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.311990   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.312026   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.332966   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0708 20:57:05.332984   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I0708 20:57:05.333056   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I0708 20:57:05.333449   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333466   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333497   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333994   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334014   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334138   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334146   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334158   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334163   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334347   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334514   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.334640   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334683   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334822   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.335285   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.335304   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.337444   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.338763   58678 addons.go:234] Setting addon default-storageclass=true in "no-preload-028021"
	W0708 20:57:05.338785   58678 addons.go:243] addon default-storageclass should already be in state true
	I0708 20:57:05.338814   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.339217   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.339304   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.339800   58678 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 20:57:05.341280   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 20:57:05.341303   58678 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 20:57:05.341327   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.344838   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.345488   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.345504   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.345683   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.345892   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.346146   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.346326   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.359060   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0708 20:57:05.359692   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.360186   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.360207   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.360545   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.361128   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.361173   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.361352   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0708 20:57:05.361971   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.362509   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.362525   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.362911   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.363148   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.364747   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.366914   58678 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:57:05.368450   58678 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:57:05.368467   58678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 20:57:05.368483   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.372067   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.372368   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.372387   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.372767   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.373030   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.373235   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.373389   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.379255   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39973
	I0708 20:57:05.379732   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.380405   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.380428   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.380832   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.381039   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.382973   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.383191   58678 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 20:57:05.383211   58678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 20:57:05.383231   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.386273   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.386682   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.386705   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.386997   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.387184   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.387336   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.387497   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.506081   58678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:57:05.525373   58678 node_ready.go:35] waiting up to 6m0s for node "no-preload-028021" to be "Ready" ...
	I0708 20:57:05.594638   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 20:57:05.594665   58678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 20:57:05.615378   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:57:05.620306   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 20:57:05.620331   58678 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 20:57:05.639840   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 20:57:05.692078   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 20:57:05.692109   58678 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 20:57:05.756364   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 20:57:06.822244   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.206830336s)
	I0708 20:57:06.822310   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18243745s)
	I0708 20:57:06.822323   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822385   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065981271s)
	I0708 20:57:06.822418   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822432   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822390   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822351   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822504   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822850   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822870   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.822879   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822886   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822955   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.822971   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822976   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822993   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.822995   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.823009   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.823020   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.823010   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.823051   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.823154   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.823164   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.823366   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.823380   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.823390   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.825436   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.825455   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.825465   58678 addons.go:475] Verifying addon metrics-server=true in "no-preload-028021"
	I0708 20:57:06.830088   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.830108   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.830406   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.830420   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.830423   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.832322   58678 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0708 20:57:02.845629   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:05.353827   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:05.940469   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:08.439911   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:06.833974   58678 addons.go:510] duration metric: took 1.546270475s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0708 20:57:07.529328   58678 node_ready.go:53] node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:09.529406   58678 node_ready.go:53] node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:11.030134   58678 node_ready.go:49] node "no-preload-028021" has status "Ready":"True"
	I0708 20:57:11.030162   58678 node_ready.go:38] duration metric: took 5.504751555s for node "no-preload-028021" to be "Ready" ...
	I0708 20:57:11.030174   58678 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:11.035309   58678 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.039750   58678 pod_ready.go:92] pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.039772   58678 pod_ready.go:81] duration metric: took 4.436756ms for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.039783   58678 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.044726   58678 pod_ready.go:92] pod "etcd-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.044748   58678 pod_ready.go:81] duration metric: took 4.958058ms for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.044756   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.049083   58678 pod_ready.go:92] pod "kube-apiserver-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.049104   58678 pod_ready.go:81] duration metric: took 4.34014ms for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.049115   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:07.846290   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:10.344964   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:10.939618   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:13.445191   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:13.056307   58678 pod_ready.go:102] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:15.056817   58678 pod_ready.go:102] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:16.063838   58678 pod_ready.go:92] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.063864   58678 pod_ready.go:81] duration metric: took 5.014740635s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.063875   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.082486   58678 pod_ready.go:92] pod "kube-proxy-6p6l6" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.082529   58678 pod_ready.go:81] duration metric: took 18.642044ms for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.082545   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.092312   58678 pod_ready.go:92] pod "kube-scheduler-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.092337   58678 pod_ready.go:81] duration metric: took 9.783638ms for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.092347   58678 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.353120   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:57:16.353203   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:57:16.355269   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:57:16.355317   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:57:16.355404   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:57:16.355558   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:57:16.355708   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:57:16.355815   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:57:16.358151   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:57:16.358312   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:57:16.358411   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:57:16.358531   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:57:16.358641   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:57:16.358732   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:57:16.358798   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:57:16.358893   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:57:16.359004   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:57:16.359128   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:57:16.359209   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:57:16.359288   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:57:16.359384   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:57:16.359509   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:57:16.359614   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:57:16.359725   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:57:16.359794   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:57:16.359881   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:57:16.359963   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:57:16.360002   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:57:16.360099   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:57:16.361960   57466 out.go:204]   - Booting up control plane ...
	I0708 20:57:16.362053   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:57:16.362196   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:57:16.362283   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:57:16.362402   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:57:16.362589   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:57:16.362819   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:57:16.362930   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363170   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363242   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363473   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363580   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363786   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363873   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364093   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364247   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364435   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364445   57466 kubeadm.go:309] 
	I0708 20:57:16.364476   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:57:16.364533   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:57:16.364541   57466 kubeadm.go:309] 
	I0708 20:57:16.364601   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:57:16.364636   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:57:16.364796   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:57:16.364820   57466 kubeadm.go:309] 
	I0708 20:57:16.364958   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:57:16.365016   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:57:16.365057   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:57:16.365063   57466 kubeadm.go:309] 
	I0708 20:57:16.365208   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:57:16.365339   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:57:16.365356   57466 kubeadm.go:309] 
	I0708 20:57:16.365490   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:57:16.365589   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:57:16.365694   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:57:16.365869   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:57:16.365969   57466 kubeadm.go:309] 
	I0708 20:57:16.365972   57466 kubeadm.go:393] duration metric: took 7m56.670441698s to StartCluster
	I0708 20:57:16.366023   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:57:16.366090   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:57:16.435868   57466 cri.go:89] found id: ""
	I0708 20:57:16.435896   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.435904   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:57:16.435910   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:57:16.435969   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:57:16.478844   57466 cri.go:89] found id: ""
	I0708 20:57:16.478881   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.478896   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:57:16.478904   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:57:16.478974   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:57:16.517414   57466 cri.go:89] found id: ""
	I0708 20:57:16.517439   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.517448   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:57:16.517455   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:57:16.517516   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:57:16.557036   57466 cri.go:89] found id: ""
	I0708 20:57:16.557063   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.557074   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:57:16.557081   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:57:16.557153   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:57:16.593604   57466 cri.go:89] found id: ""
	I0708 20:57:16.593631   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.593641   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:57:16.593648   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:57:16.593704   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:57:16.634143   57466 cri.go:89] found id: ""
	I0708 20:57:16.634173   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.634183   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:57:16.634190   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:57:16.634248   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:57:16.676553   57466 cri.go:89] found id: ""
	I0708 20:57:16.676585   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.676595   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:57:16.676602   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:57:16.676663   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:57:16.715652   57466 cri.go:89] found id: ""
	I0708 20:57:16.715674   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.715682   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:57:16.715692   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:57:16.715703   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:57:16.730747   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:57:16.730776   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:57:16.814950   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:57:16.814976   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:57:16.815005   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:57:16.921144   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:57:16.921194   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:57:16.973261   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:57:16.973294   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 20:57:17.031242   57466 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0708 20:57:17.031307   57466 out.go:239] * 
	W0708 20:57:17.031362   57466 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.031389   57466 out.go:239] * 
	W0708 20:57:17.032214   57466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:57:17.035847   57466 out.go:177] 
	W0708 20:57:17.037198   57466 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.037247   57466 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0708 20:57:17.037274   57466 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0708 20:57:17.039077   57466 out.go:177] 
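For reference, the kubelet diagnostics recommended in the kubeadm output above can be reproduced against the affected cluster roughly as follows. This is only a sketch: the profile name is a placeholder, exact flags may differ by minikube version, and the last line simply restates the --extra-config suggestion that minikube itself printed above.

    # run the suggested checks inside the node of the failing profile
    minikube ssh -p <profile> "sudo systemctl status kubelet"
    minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 100"
    minikube ssh -p <profile> "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
    # retry with the cgroup-driver hint suggested in the log
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd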
	I0708 20:57:12.345241   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:14.346235   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:16.347467   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:15.940334   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:17.943302   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:18.102691   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:20.599066   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:18.847908   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:21.345112   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:20.441347   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:22.939786   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:24.940449   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:22.600192   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:25.100175   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:23.346438   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:25.845181   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.439923   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:29.940540   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.600010   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:30.099104   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.845456   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:29.845526   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.440285   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.939729   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.101616   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.598135   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.345268   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.844782   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.845440   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.940110   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:38.940964   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.600034   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:39.099711   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.100745   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:38.847223   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.344382   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.441047   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.939510   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.599982   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:46.101913   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.345029   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:45.345390   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:45.939787   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:47.940956   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:49.941949   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:48.598871   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:50.600154   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:47.346271   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:49.346661   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:51.844897   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:52.439646   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:54.440569   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:52.604096   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:55.103841   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:54.345832   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:56.845398   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:56.440640   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:58.939537   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:57.598505   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:00.098797   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:58.848087   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:01.346566   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:00.940434   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:03.439927   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:02.602188   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:05.100284   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:03.848841   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:06.346912   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:05.441676   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:07.942369   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:07.599099   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:09.601188   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:08.848926   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:11.346458   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:10.439620   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:12.440274   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:14.939694   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:12.098918   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:14.099419   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:13.844947   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:15.845203   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:16.940812   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:18.941307   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:16.599322   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:19.098815   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:21.100160   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:17.845975   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:20.347071   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:21.439802   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:23.441183   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:23.598459   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:26.098717   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:22.844674   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:24.845210   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:26.848564   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:25.939783   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:28.439490   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:28.099236   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:30.599130   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:29.344306   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:31.345070   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:30.439832   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:32.440229   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:34.441525   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:32.600143   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:35.100068   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:33.345938   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:35.845421   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:36.939642   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:38.941263   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:37.599587   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:40.099121   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:37.845529   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:40.345830   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:41.441175   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:43.941076   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:42.099418   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:44.101452   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:42.844426   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:44.846831   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:45.941732   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:48.440398   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:46.599328   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:48.600055   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:51.099949   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:47.347094   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:49.846223   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:50.940172   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:52.940229   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:54.941034   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:53.100619   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:55.599681   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:52.347726   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:54.845461   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:56.846142   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:56.941957   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.439408   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:57.600406   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.600450   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.344802   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:01.345852   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:01.939259   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:03.940182   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:02.101218   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:04.600651   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:03.845810   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:05.846170   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:05.940757   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:08.439635   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:07.100571   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:09.100718   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:08.344894   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:10.346744   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:10.440413   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:12.440882   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:14.940151   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:11.601260   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:13.603589   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:16.112928   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:12.848135   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:15.346591   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:17.440326   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:19.440421   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:18.598791   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:20.600589   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:17.845413   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:19.849057   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:21.941414   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:24.441214   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:23.100854   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:25.599374   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:22.346925   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:24.845239   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:26.941311   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:28.948332   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:28.100928   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:30.600465   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:27.345835   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:29.846655   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:31.848193   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:31.440572   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:33.939354   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:33.100068   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:35.601159   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:34.345252   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:36.346479   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:35.939843   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:37.941381   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:38.100393   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.102157   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:38.844435   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.845328   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.438849   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:42.441256   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:44.442877   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:42.601119   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:45.101132   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:43.345149   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:45.345522   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:46.940287   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:48.941589   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:47.101717   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:49.598367   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:47.846030   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:49.846247   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:51.438745   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:53.441587   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:51.599309   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:54.105369   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:56.110085   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:52.347026   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:54.845971   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:55.939702   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:57.940731   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:58.598821   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:00.599435   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:57.345043   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:59.346796   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:01.347030   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:00.439467   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:02.443994   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:04.941721   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:02.599994   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:05.098379   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:03.845802   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:05.846016   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:07.439561   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:09.440326   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:07.099339   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:09.599746   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:08.345432   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:10.347888   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:11.940331   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:13.940496   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:12.100751   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:14.597860   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:12.349653   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:14.846452   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:16.440554   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:18.441219   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:19.434076   59107 pod_ready.go:81] duration metric: took 4m0.000896796s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" ...
	E0708 21:00:19.434112   59107 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0708 21:00:19.434131   59107 pod_ready.go:38] duration metric: took 4m10.050938227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:00:19.434157   59107 kubeadm.go:591] duration metric: took 4m18.183643708s to restartPrimaryControlPlane
	W0708 21:00:19.434219   59107 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 21:00:19.434258   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 21:00:16.598896   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:18.598974   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:20.599027   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:17.345157   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:19.345498   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:21.346939   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:22.599140   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:24.600455   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:23.347325   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:25.846384   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:27.104536   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:29.598836   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:27.847635   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:30.345065   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:31.600246   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:34.099964   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:32.348256   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:34.846942   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:36.598075   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:38.599175   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:40.599720   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:37.345319   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:38.339580   59655 pod_ready.go:81] duration metric: took 4m0.000925316s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" ...
	E0708 21:00:38.339615   59655 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0708 21:00:38.339635   59655 pod_ready.go:38] duration metric: took 4m7.551446129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:00:38.339667   59655 kubeadm.go:591] duration metric: took 4m17.566917749s to restartPrimaryControlPlane
	W0708 21:00:38.339731   59655 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 21:00:38.339763   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
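The interleaved pod_ready entries above come from three concurrent test profiles (processes 58678, 59107 and 59655), each polling its metrics-server pod's Ready condition roughly every two seconds until the 4m0s WaitExtra deadline expires; the 58678 profile's polling continues below. A minimal sketch of that polling pattern, shelling out to kubectl rather than reproducing minikube's pod_ready.go (the function name and jsonpath query are illustrative assumptions, not the real implementation):

    // Illustrative only: poll a pod's Ready condition until true or timeout.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitPodReady(namespace, pod string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
    			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		// Mirrors the log lines above: still not Ready, try again shortly.
    		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", pod, namespace)
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting %s for pod %q in %q namespace to be \"Ready\"", timeout, pod, namespace)
    }

    func main() {
    	if err := waitPodReady("kube-system", "metrics-server-569cc877fc-h4btg", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }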
	I0708 21:00:43.101768   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:45.102321   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:47.599770   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:50.100703   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:51.419295   59107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.985013246s)
	I0708 21:00:51.419373   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:00:51.438876   59107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:00:51.451558   59107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:00:51.463932   59107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:00:51.463959   59107 kubeadm.go:156] found existing configuration files:
	
	I0708 21:00:51.464013   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 21:00:51.476729   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:00:51.476791   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:00:51.488357   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 21:00:51.499650   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:00:51.499720   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:00:51.510559   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 21:00:51.522747   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:00:51.522821   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:00:51.534156   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 21:00:51.545057   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:00:51.545123   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
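The grep/rm pairs above are minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint (https://control-plane.minikube.internal:8443 here; the 59655 profile repeats the same pass against port 8444 further down) and removed when the check fails. A rough sketch of that loop, again shelling out for illustration (cleanupStaleConfigs is an assumed name, not minikube's):

    // Illustrative sketch of the stale kubeconfig cleanup seen above.
    package main

    import "os/exec"

    func cleanupStaleConfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits 1 when the endpoint is absent and 2 when the file is
    		// missing (the "Process exited with status 2" case in the log);
    		// either way the stale file is removed.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
    }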
	I0708 21:00:51.556712   59107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:00:51.766960   59107 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 21:00:52.599619   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:55.102565   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:01.185862   59107 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 21:01:01.185936   59107 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:01:01.186061   59107 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:01:01.186246   59107 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:01:01.186375   59107 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 21:01:01.186477   59107 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:01:01.188387   59107 out.go:204]   - Generating certificates and keys ...
	I0708 21:01:01.188489   59107 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:01:01.188575   59107 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:01:01.188655   59107 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 21:01:01.188754   59107 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 21:01:01.188856   59107 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 21:01:01.188937   59107 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 21:01:01.189015   59107 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 21:01:01.189107   59107 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 21:01:01.189216   59107 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 21:01:01.189326   59107 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 21:01:01.189381   59107 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 21:01:01.189445   59107 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:01:01.189504   59107 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:01:01.189571   59107 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 21:01:01.189636   59107 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:01:01.189732   59107 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:01:01.189822   59107 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:01:01.189939   59107 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:01:01.190019   59107 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:01:01.192426   59107 out.go:204]   - Booting up control plane ...
	I0708 21:01:01.192527   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:01:01.192598   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:01:01.192674   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:01:01.192795   59107 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:01:01.192892   59107 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:01:01.192949   59107 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:01:01.193078   59107 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 21:01:01.193150   59107 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 21:01:01.193204   59107 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001227366s
	I0708 21:01:01.193274   59107 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 21:01:01.193329   59107 kubeadm.go:309] [api-check] The API server is healthy after 5.506719576s
	I0708 21:01:01.193428   59107 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 21:01:01.193574   59107 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 21:01:01.193655   59107 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 21:01:01.193854   59107 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-239931 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 21:01:01.193936   59107 kubeadm.go:309] [bootstrap-token] Using token: uu1yg0.6mx8u39sjlxfysca
	I0708 21:01:01.196508   59107 out.go:204]   - Configuring RBAC rules ...
	I0708 21:01:01.196638   59107 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 21:01:01.196748   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 21:01:01.196867   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 21:01:01.196978   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 21:01:01.197141   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 21:01:01.197217   59107 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 21:01:01.197316   59107 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 21:01:01.197355   59107 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 21:01:01.197397   59107 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 21:01:01.197403   59107 kubeadm.go:309] 
	I0708 21:01:01.197451   59107 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 21:01:01.197457   59107 kubeadm.go:309] 
	I0708 21:01:01.197542   59107 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 21:01:01.197555   59107 kubeadm.go:309] 
	I0708 21:01:01.197597   59107 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 21:01:01.197673   59107 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 21:01:01.197748   59107 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 21:01:01.197761   59107 kubeadm.go:309] 
	I0708 21:01:01.197850   59107 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 21:01:01.197860   59107 kubeadm.go:309] 
	I0708 21:01:01.197903   59107 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 21:01:01.197912   59107 kubeadm.go:309] 
	I0708 21:01:01.197971   59107 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 21:01:01.198059   59107 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 21:01:01.198155   59107 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 21:01:01.198165   59107 kubeadm.go:309] 
	I0708 21:01:01.198279   59107 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 21:01:01.198389   59107 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 21:01:01.198400   59107 kubeadm.go:309] 
	I0708 21:01:01.198515   59107 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token uu1yg0.6mx8u39sjlxfysca \
	I0708 21:01:01.198663   59107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 21:01:01.198697   59107 kubeadm.go:309] 	--control-plane 
	I0708 21:01:01.198706   59107 kubeadm.go:309] 
	I0708 21:01:01.198821   59107 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 21:01:01.198830   59107 kubeadm.go:309] 
	I0708 21:01:01.198942   59107 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token uu1yg0.6mx8u39sjlxfysca \
	I0708 21:01:01.199078   59107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 21:01:01.199095   59107 cni.go:84] Creating CNI manager for ""
	I0708 21:01:01.199104   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:01:01.201409   59107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 21:00:57.600428   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:00.101501   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:01.202540   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 21:01:01.214691   59107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
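The scp above installs a 496-byte /etc/cni/net.d/1-k8s.conflist for the bridge CNI chosen at the "kvm2 driver + crio runtime" step. The log does not show the file's contents, so the snippet below only writes a generic bridge-plus-host-local conflist of the kind such a step typically produces; it is an assumed example, not minikube's exact payload:

    // Illustrative only: a generic bridge CNI conflist; the actual 496-byte
    // file copied above is not reproduced here.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
    	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
    }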
	I0708 21:01:01.238039   59107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 21:01:01.238180   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:01.238204   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-239931 minikube.k8s.io/updated_at=2024_07_08T21_01_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=embed-certs-239931 minikube.k8s.io/primary=true
	I0708 21:01:01.255228   59107 ops.go:34] apiserver oom_adj: -16
	I0708 21:01:01.441736   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:01.942570   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.442775   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.941941   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:03.441910   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:03.942762   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:04.442791   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:04.942122   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.600102   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:04.601357   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:05.442031   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:05.942414   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:06.442353   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:06.942075   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:07.442007   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:07.941952   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:08.442578   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:08.942110   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:09.442438   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:09.942436   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:10.666697   59655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.326909913s)
	I0708 21:01:10.666766   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:10.684044   59655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:01:10.695291   59655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:01:10.705771   59655 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:01:10.705790   59655 kubeadm.go:156] found existing configuration files:
	
	I0708 21:01:10.705829   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0708 21:01:10.717858   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:01:10.717911   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:01:10.728721   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0708 21:01:10.738917   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:01:10.738985   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:01:10.749795   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0708 21:01:10.760976   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:01:10.761036   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:01:10.771625   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0708 21:01:10.781677   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:01:10.781738   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 21:01:10.791622   59655 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:01:10.855152   59655 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 21:01:10.855246   59655 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:01:11.027005   59655 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:01:11.027132   59655 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:01:11.027245   59655 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 21:01:11.262898   59655 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:01:07.098267   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:09.099083   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:11.099398   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:11.264777   59655 out.go:204]   - Generating certificates and keys ...
	I0708 21:01:11.264897   59655 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:01:11.265011   59655 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:01:11.265143   59655 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 21:01:11.265245   59655 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 21:01:11.265331   59655 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 21:01:11.265412   59655 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 21:01:11.265516   59655 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 21:01:11.265601   59655 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 21:01:11.265692   59655 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 21:01:11.265806   59655 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 21:01:11.265883   59655 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 21:01:11.265979   59655 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:01:11.307094   59655 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:01:11.410219   59655 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 21:01:11.840751   59655 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:01:12.163906   59655 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:01:12.260797   59655 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:01:12.261513   59655 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:01:12.264128   59655 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:01:12.266095   59655 out.go:204]   - Booting up control plane ...
	I0708 21:01:12.266212   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:01:12.266301   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:01:12.267540   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:01:12.290823   59655 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:01:12.291578   59655 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:01:12.291693   59655 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:01:10.442308   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:10.942270   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:11.442233   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:11.942533   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:12.442040   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:12.942629   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:13.441853   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:13.565655   59107 kubeadm.go:1107] duration metric: took 12.327535547s to wait for elevateKubeSystemPrivileges
	W0708 21:01:13.565704   59107 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 21:01:13.565714   59107 kubeadm.go:393] duration metric: took 5m12.375759038s to StartCluster
	I0708 21:01:13.565736   59107 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:13.565845   59107 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:01:13.568610   59107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:13.568940   59107 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:01:13.568980   59107 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 21:01:13.569061   59107 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-239931"
	I0708 21:01:13.569098   59107 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-239931"
	W0708 21:01:13.569113   59107 addons.go:243] addon storage-provisioner should already be in state true
	I0708 21:01:13.569136   59107 addons.go:69] Setting metrics-server=true in profile "embed-certs-239931"
	I0708 21:01:13.569098   59107 addons.go:69] Setting default-storageclass=true in profile "embed-certs-239931"
	I0708 21:01:13.569169   59107 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-239931"
	I0708 21:01:13.569178   59107 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:01:13.569149   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.569185   59107 addons.go:234] Setting addon metrics-server=true in "embed-certs-239931"
	W0708 21:01:13.569244   59107 addons.go:243] addon metrics-server should already be in state true
	I0708 21:01:13.569274   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.569617   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569639   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569648   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.569671   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.569673   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569698   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.570670   59107 out.go:177] * Verifying Kubernetes components...
	I0708 21:01:13.572338   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:01:13.590692   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40615
	I0708 21:01:13.590708   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0708 21:01:13.590701   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0708 21:01:13.591271   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591375   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591622   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591792   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.591806   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.591888   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.591909   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.592348   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.592368   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.592387   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.592422   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.592655   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.593065   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.593092   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.593568   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.594139   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.594196   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.596834   59107 addons.go:234] Setting addon default-storageclass=true in "embed-certs-239931"
	W0708 21:01:13.596857   59107 addons.go:243] addon default-storageclass should already be in state true
	I0708 21:01:13.596892   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.597258   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.597278   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.615398   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0708 21:01:13.616090   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.617374   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.617395   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.617542   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37809
	I0708 21:01:13.618025   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.618066   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.618450   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.618538   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.618563   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.618953   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.619151   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.621015   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.622114   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43107
	I0708 21:01:13.622533   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.623046   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.623071   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.623346   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.623757   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.624750   59107 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 21:01:13.625744   59107 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:01:13.626604   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 21:01:13.626626   59107 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 21:01:13.626650   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.627717   59107 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:13.627737   59107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 21:01:13.627756   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.628207   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.628245   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.631548   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.633692   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.633737   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.634732   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.634960   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.635186   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.635262   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.635282   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.635415   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.635581   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.635946   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.636122   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.636282   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.636468   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.650948   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34883
	I0708 21:01:13.651543   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.652143   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.652165   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.652659   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.652835   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.654717   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.654971   59107 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:13.654988   59107 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 21:01:13.655006   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.658670   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.659361   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.659475   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.659800   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.660109   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.660275   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.660406   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.813860   59107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:01:13.832841   59107 node_ready.go:35] waiting up to 6m0s for node "embed-certs-239931" to be "Ready" ...
	I0708 21:01:13.842398   59107 node_ready.go:49] node "embed-certs-239931" has status "Ready":"True"
	I0708 21:01:13.842420   59107 node_ready.go:38] duration metric: took 9.540746ms for node "embed-certs-239931" to be "Ready" ...
	I0708 21:01:13.842430   59107 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:13.853426   59107 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.861421   59107 pod_ready.go:92] pod "etcd-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.861451   59107 pod_ready.go:81] duration metric: took 7.991733ms for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.861466   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.873198   59107 pod_ready.go:92] pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.873228   59107 pod_ready.go:81] duration metric: took 11.754017ms for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.873243   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.882509   59107 pod_ready.go:92] pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.882560   59107 pod_ready.go:81] duration metric: took 9.307056ms for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.882574   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.890814   59107 pod_ready.go:92] pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.890843   59107 pod_ready.go:81] duration metric: took 8.26049ms for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.890854   59107 pod_ready.go:38] duration metric: took 48.414688ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
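	The same readiness information can be checked directly with kubectl; a sketch assuming the kubeconfig context matches the profile name, with the component labels taken from the wait list above:

	  kubectl --context embed-certs-239931 -n kube-system get pods \
	    -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'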
	I0708 21:01:13.890872   59107 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:13.890934   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:13.913170   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 21:01:13.913199   59107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 21:01:13.936334   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:13.942642   59107 api_server.go:72] duration metric: took 373.624334ms to wait for apiserver process to appear ...
	I0708 21:01:13.942673   59107 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:13.942696   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 21:01:13.947241   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0708 21:01:13.948330   59107 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:13.948354   59107 api_server.go:131] duration metric: took 5.673644ms to wait for apiserver health ...
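	The healthz probe above can be reproduced by hand from inside the VM (a sketch; endpoint taken from the log, -k used only to skip certificate verification for a quick check):

	  minikube -p embed-certs-239931 ssh -- curl -sk https://192.168.61.126:8443/healthz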
	I0708 21:01:13.948364   59107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:13.968333   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:13.999888   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 21:01:13.999920   59107 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 21:01:14.072446   59107 system_pods.go:59] 5 kube-system pods found
	I0708 21:01:14.072553   59107 system_pods.go:61] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.072575   59107 system_pods.go:61] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.072594   59107 system_pods.go:61] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.072608   59107 system_pods.go:61] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending
	I0708 21:01:14.072621   59107 system_pods.go:61] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.072637   59107 system_pods.go:74] duration metric: took 124.266452ms to wait for pod list to return data ...
	I0708 21:01:14.072663   59107 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:14.111310   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:14.111337   59107 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 21:01:14.196596   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:14.248043   59107 default_sa.go:45] found service account: "default"
	I0708 21:01:14.248075   59107 default_sa.go:55] duration metric: took 175.396297ms for default service account to be created ...
	I0708 21:01:14.248086   59107 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:14.381129   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.381166   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.381490   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.381507   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.381517   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.381525   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.383203   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:14.383213   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.383229   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.430533   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.430558   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.430835   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:14.431498   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.431558   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.440088   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.440129   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.440140   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.440148   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.440156   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.440162   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.440171   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.440176   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.440199   59107 retry.go:31] will retry after 211.74015ms: missing components: kube-dns, kube-proxy
	I0708 21:01:14.660845   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.660901   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.660916   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.660928   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.660938   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.660946   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.660990   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.661002   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.661036   59107 retry.go:31] will retry after 318.627165ms: missing components: kube-dns, kube-proxy
	I0708 21:01:14.988296   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.988336   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.988348   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.988359   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.988369   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.988376   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.988388   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.988398   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.988425   59107 retry.go:31] will retry after 333.622066ms: missing components: kube-dns, kube-proxy
	I0708 21:01:15.024853   59107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.056470802s)
	I0708 21:01:15.024902   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.024914   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.025237   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.025264   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.025266   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.025279   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.025288   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.025550   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.025566   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.348381   59107 system_pods.go:86] 8 kube-system pods found
	I0708 21:01:15.348419   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.348430   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.348440   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:15.348448   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:15.348455   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:15.348464   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:15.348473   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:15.348483   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:15.348502   59107 retry.go:31] will retry after 415.910372ms: missing components: kube-dns, kube-proxy
	I0708 21:01:15.736384   59107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.539741133s)
	I0708 21:01:15.736440   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.736456   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.736743   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.736782   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.736763   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.736803   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.736851   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.737097   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.737135   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.737148   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.737157   59107 addons.go:475] Verifying addon metrics-server=true in "embed-certs-239931"
	I0708 21:01:15.739025   59107 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0708 21:01:13.102963   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:15.601580   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:16.101049   58678 pod_ready.go:81] duration metric: took 4m0.00868677s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	E0708 21:01:16.101081   58678 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0708 21:01:16.101094   58678 pod_ready.go:38] duration metric: took 4m5.070908601s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:16.101112   58678 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:16.101147   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:16.101210   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:16.175601   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:16.175631   58678 cri.go:89] found id: ""
	I0708 21:01:16.175642   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:16.175703   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.182938   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:16.183013   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:16.261385   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:16.261411   58678 cri.go:89] found id: ""
	I0708 21:01:16.261423   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:16.261483   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.266231   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:16.266310   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:15.741167   59107 addons.go:510] duration metric: took 2.172185316s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
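	Once the metrics-server pod reaches Ready, the addon can be verified with standard kubectl calls (a sketch; v1beta1.metrics.k8s.io is metrics-server's usual APIService registration, assumed here rather than taken from this log):

	  kubectl --context embed-certs-239931 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context embed-certs-239931 top nodes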
	I0708 21:01:15.890659   59107 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:15.890702   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.890713   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.890723   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:15.890731   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:15.890738   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:15.890745   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Running
	I0708 21:01:15.890751   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:15.890759   59107 system_pods.go:89] "metrics-server-569cc877fc-f2dkn" [1d3c3e8e-356d-40b9-8add-35eec096e9f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:15.890772   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:15.890790   59107 retry.go:31] will retry after 557.749423ms: missing components: kube-dns
	I0708 21:01:16.457046   59107 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:16.457093   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:16.457105   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:16.457114   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:16.457124   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:16.457131   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:16.457137   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Running
	I0708 21:01:16.457143   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:16.457153   59107 system_pods.go:89] "metrics-server-569cc877fc-f2dkn" [1d3c3e8e-356d-40b9-8add-35eec096e9f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:16.457173   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:16.457183   59107 system_pods.go:126] duration metric: took 2.209089992s to wait for k8s-apps to be running ...
	I0708 21:01:16.457196   59107 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:16.457251   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:16.474652   59107 system_svc.go:56] duration metric: took 17.443712ms WaitForService to wait for kubelet
	I0708 21:01:16.474691   59107 kubeadm.go:576] duration metric: took 2.905677883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:16.474715   59107 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:16.478431   59107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:16.478456   59107 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:16.478480   59107 node_conditions.go:105] duration metric: took 3.758433ms to run NodePressure ...
	I0708 21:01:16.478502   59107 start.go:240] waiting for startup goroutines ...
	I0708 21:01:16.478515   59107 start.go:245] waiting for cluster config update ...
	I0708 21:01:16.478529   59107 start.go:254] writing updated cluster config ...
	I0708 21:01:16.478860   59107 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:16.536046   59107 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:16.538131   59107 out.go:177] * Done! kubectl is now configured to use "embed-certs-239931" cluster and "default" namespace by default
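	A typical follow-up sanity check at this point (sketch only; the expected context name comes from the "Done!" line above):

	  kubectl config current-context        # expected: embed-certs-239931
	  kubectl get nodes -o wide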
	I0708 21:01:12.440116   59655 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 21:01:12.440237   59655 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 21:01:13.441567   59655 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001312349s
	I0708 21:01:13.441690   59655 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 21:01:18.943345   59655 kubeadm.go:309] [api-check] The API server is healthy after 5.501634999s
	I0708 21:01:18.963728   59655 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 21:01:18.980036   59655 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 21:01:19.028362   59655 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 21:01:19.028635   59655 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-071971 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 21:01:19.051700   59655 kubeadm.go:309] [bootstrap-token] Using token: guoi3f.tsy4dvdlokyfqa2b
	I0708 21:01:19.053224   59655 out.go:204]   - Configuring RBAC rules ...
	I0708 21:01:19.053323   59655 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 21:01:19.063058   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 21:01:19.077711   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 21:01:19.090415   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 21:01:19.095539   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 21:01:19.101465   59655 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 21:01:19.351634   59655 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 21:01:19.809053   59655 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 21:01:20.359069   59655 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 21:01:20.359125   59655 kubeadm.go:309] 
	I0708 21:01:20.359193   59655 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 21:01:20.359227   59655 kubeadm.go:309] 
	I0708 21:01:20.359368   59655 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 21:01:20.359379   59655 kubeadm.go:309] 
	I0708 21:01:20.359439   59655 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 21:01:20.359553   59655 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 21:01:20.359613   59655 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 21:01:20.359624   59655 kubeadm.go:309] 
	I0708 21:01:20.359686   59655 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 21:01:20.359694   59655 kubeadm.go:309] 
	I0708 21:01:20.359733   59655 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 21:01:20.359740   59655 kubeadm.go:309] 
	I0708 21:01:20.359787   59655 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 21:01:20.359899   59655 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 21:01:20.359994   59655 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 21:01:20.360003   59655 kubeadm.go:309] 
	I0708 21:01:20.360096   59655 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 21:01:20.360194   59655 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 21:01:20.360202   59655 kubeadm.go:309] 
	I0708 21:01:20.360311   59655 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token guoi3f.tsy4dvdlokyfqa2b \
	I0708 21:01:20.360468   59655 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 21:01:20.360507   59655 kubeadm.go:309] 	--control-plane 
	I0708 21:01:20.360516   59655 kubeadm.go:309] 
	I0708 21:01:20.360628   59655 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 21:01:20.360639   59655 kubeadm.go:309] 
	I0708 21:01:20.360765   59655 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token guoi3f.tsy4dvdlokyfqa2b \
	I0708 21:01:20.360891   59655 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 21:01:20.361857   59655 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
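	This kubelet warning is routinely seen in minikube runs, since minikube manages the kubelet unit itself (compare the "sudo systemctl start kubelet" run earlier in this log); on a hand-managed node it would be addressed exactly as kubeadm suggests:

	  sudo systemctl enable kubelet.service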
	I0708 21:01:20.361894   59655 cni.go:84] Creating CNI manager for ""
	I0708 21:01:20.361910   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:01:20.363579   59655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 21:01:16.309299   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:16.309328   58678 cri.go:89] found id: ""
	I0708 21:01:16.309337   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:16.309403   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.314236   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:16.314320   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:16.371891   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:16.371919   58678 cri.go:89] found id: ""
	I0708 21:01:16.371937   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:16.372008   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.380409   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:16.380480   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:16.428411   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:16.428441   58678 cri.go:89] found id: ""
	I0708 21:01:16.428452   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:16.428514   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.433310   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:16.433390   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:16.474785   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:16.474807   58678 cri.go:89] found id: ""
	I0708 21:01:16.474816   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:16.474882   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.480849   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:16.480933   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:16.529115   58678 cri.go:89] found id: ""
	I0708 21:01:16.529136   58678 logs.go:276] 0 containers: []
	W0708 21:01:16.529146   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:16.529153   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:16.529222   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:16.576499   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:16.576519   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:16.576527   58678 cri.go:89] found id: ""
	I0708 21:01:16.576536   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:16.576584   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.581261   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.587704   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:16.587733   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:16.651329   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:16.651385   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:16.706341   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:16.706380   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:17.302518   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:17.302570   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:17.373619   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:17.373651   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:17.414687   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:17.414722   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:17.470462   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:17.470499   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:17.487151   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:17.487189   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:17.625611   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:17.625655   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:17.673291   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:17.673325   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:17.712222   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:17.712253   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:17.752635   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:17.752665   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:17.794056   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:17.794085   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
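	The same evidence can be collected manually inside the node using the commands the test runs above (a sketch; the container ID is the kube-apiserver ID found in this log):

	  sudo crictl ps -a
	  sudo crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4
	  sudo journalctl -u crio -n 400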
	I0708 21:01:20.341805   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:20.362405   58678 api_server.go:72] duration metric: took 4m15.074761342s to wait for apiserver process to appear ...
	I0708 21:01:20.362430   58678 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:20.362465   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:20.362523   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:20.409947   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:20.409974   58678 cri.go:89] found id: ""
	I0708 21:01:20.409983   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:20.410040   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.414415   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:20.414476   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:20.463162   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:20.463186   58678 cri.go:89] found id: ""
	I0708 21:01:20.463196   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:20.463263   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.468905   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:20.468986   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:20.514265   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:20.514291   58678 cri.go:89] found id: ""
	I0708 21:01:20.514299   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:20.514357   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.519003   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:20.519081   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:20.565097   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:20.565122   58678 cri.go:89] found id: ""
	I0708 21:01:20.565132   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:20.565190   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.569971   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:20.570048   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:20.614435   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:20.614459   58678 cri.go:89] found id: ""
	I0708 21:01:20.614469   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:20.614525   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.619745   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:20.619824   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:20.660213   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:20.660235   58678 cri.go:89] found id: ""
	I0708 21:01:20.660242   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:20.660292   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.664740   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:20.664822   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:20.710279   58678 cri.go:89] found id: ""
	I0708 21:01:20.710300   58678 logs.go:276] 0 containers: []
	W0708 21:01:20.710307   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:20.710312   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:20.710359   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:20.751880   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:20.751906   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:20.751910   58678 cri.go:89] found id: ""
	I0708 21:01:20.751917   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:20.752028   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.756530   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.760679   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:20.760705   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:20.800525   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:20.800556   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:20.845629   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:20.845666   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:20.364837   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 21:01:20.376977   59655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
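	The generated bridge config can be inspected on the node afterwards (a sketch; path and profile name taken from the surrounding lines, the 496-byte contents themselves are not reproduced here):

	  minikube -p default-k8s-diff-port-071971 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist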
	I0708 21:01:20.400133   59655 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 21:01:20.400241   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:20.400291   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-071971 minikube.k8s.io/updated_at=2024_07_08T21_01_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=default-k8s-diff-port-071971 minikube.k8s.io/primary=true
	I0708 21:01:20.597429   59655 ops.go:34] apiserver oom_adj: -16
	I0708 21:01:20.597490   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.098582   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.597812   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:22.097790   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.356988   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:21.357025   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:21.416130   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:21.416160   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:21.431831   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:21.431865   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:21.479568   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:21.479597   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:21.527937   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:21.527970   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:21.569569   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:21.569605   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:21.691646   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:21.691678   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:21.737949   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:21.737975   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:21.789038   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:21.789069   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:21.831677   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:21.831703   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:01:24.380502   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 21:01:24.385139   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I0708 21:01:24.386116   58678 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:24.386137   58678 api_server.go:131] duration metric: took 4.023699983s to wait for apiserver health ...
	I0708 21:01:24.386146   58678 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:24.386171   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:24.386225   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:24.423786   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:24.423809   58678 cri.go:89] found id: ""
	I0708 21:01:24.423816   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:24.423869   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.428385   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:24.428447   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:24.467186   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:24.467206   58678 cri.go:89] found id: ""
	I0708 21:01:24.467213   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:24.467269   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.472208   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:24.472273   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:24.511157   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:24.511188   58678 cri.go:89] found id: ""
	I0708 21:01:24.511199   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:24.511266   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.516077   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:24.516144   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:24.556095   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:24.556115   58678 cri.go:89] found id: ""
	I0708 21:01:24.556122   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:24.556171   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.560735   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:24.560795   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:24.602473   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:24.602498   58678 cri.go:89] found id: ""
	I0708 21:01:24.602508   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:24.602562   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.608926   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:24.609003   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:24.653230   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:24.653258   58678 cri.go:89] found id: ""
	I0708 21:01:24.653267   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:24.653327   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.657884   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:24.657954   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:24.700775   58678 cri.go:89] found id: ""
	I0708 21:01:24.700800   58678 logs.go:276] 0 containers: []
	W0708 21:01:24.700810   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:24.700817   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:24.700876   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:24.738593   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:24.738619   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:24.738625   58678 cri.go:89] found id: ""
	I0708 21:01:24.738633   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:24.738689   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.743324   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.747684   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:24.747709   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:24.800431   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:24.800467   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:24.910702   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:24.910738   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:24.967323   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:24.967355   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:25.012335   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:25.012367   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:25.393024   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:25.393064   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:01:25.449280   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:25.449315   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:25.488676   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:25.488703   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:25.503705   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:25.503734   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:25.551111   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:25.551155   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:25.598388   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:25.598425   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:25.642052   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:25.642087   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:25.680632   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:25.680665   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:22.597628   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:23.098128   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:23.597756   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:24.097555   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:24.598149   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:25.098149   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:25.598255   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:26.097514   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:26.598211   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:27.097610   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.229251   58678 system_pods.go:59] 8 kube-system pods found
	I0708 21:01:28.229286   58678 system_pods.go:61] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running
	I0708 21:01:28.229293   58678 system_pods.go:61] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running
	I0708 21:01:28.229298   58678 system_pods.go:61] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running
	I0708 21:01:28.229304   58678 system_pods.go:61] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running
	I0708 21:01:28.229308   58678 system_pods.go:61] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 21:01:28.229312   58678 system_pods.go:61] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running
	I0708 21:01:28.229321   58678 system_pods.go:61] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:28.229327   58678 system_pods.go:61] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 21:01:28.229337   58678 system_pods.go:74] duration metric: took 3.843183956s to wait for pod list to return data ...
	I0708 21:01:28.229347   58678 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:28.232297   58678 default_sa.go:45] found service account: "default"
	I0708 21:01:28.232323   58678 default_sa.go:55] duration metric: took 2.96709ms for default service account to be created ...
	I0708 21:01:28.232333   58678 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:28.240720   58678 system_pods.go:86] 8 kube-system pods found
	I0708 21:01:28.240750   58678 system_pods.go:89] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running
	I0708 21:01:28.240755   58678 system_pods.go:89] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running
	I0708 21:01:28.240760   58678 system_pods.go:89] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running
	I0708 21:01:28.240765   58678 system_pods.go:89] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running
	I0708 21:01:28.240770   58678 system_pods.go:89] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 21:01:28.240774   58678 system_pods.go:89] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running
	I0708 21:01:28.240781   58678 system_pods.go:89] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:28.240787   58678 system_pods.go:89] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 21:01:28.240794   58678 system_pods.go:126] duration metric: took 8.454141ms to wait for k8s-apps to be running ...
	I0708 21:01:28.240804   58678 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:28.240855   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:28.256600   58678 system_svc.go:56] duration metric: took 15.789082ms WaitForService to wait for kubelet
	I0708 21:01:28.256630   58678 kubeadm.go:576] duration metric: took 4m22.968988646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:28.256654   58678 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:28.260384   58678 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:28.260402   58678 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:28.260412   58678 node_conditions.go:105] duration metric: took 3.753004ms to run NodePressure ...
	I0708 21:01:28.260422   58678 start.go:240] waiting for startup goroutines ...
	I0708 21:01:28.260429   58678 start.go:245] waiting for cluster config update ...
	I0708 21:01:28.260438   58678 start.go:254] writing updated cluster config ...
	I0708 21:01:28.260686   58678 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:28.311517   58678 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:28.313560   58678 out.go:177] * Done! kubectl is now configured to use "no-preload-028021" cluster and "default" namespace by default
	I0708 21:01:27.598457   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.098475   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.598380   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:29.097496   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:29.598229   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:30.097844   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:30.598323   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:31.097781   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:31.598085   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:32.098438   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:32.598450   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.098414   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.597823   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.688717   59655 kubeadm.go:1107] duration metric: took 13.288534329s to wait for elevateKubeSystemPrivileges
	W0708 21:01:33.688756   59655 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 21:01:33.688765   59655 kubeadm.go:393] duration metric: took 5m12.976251287s to StartCluster
	I0708 21:01:33.688782   59655 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:33.688874   59655 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:01:33.690446   59655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:33.690691   59655 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:01:33.690814   59655 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 21:01:33.690875   59655 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-071971"
	I0708 21:01:33.690893   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:01:33.690907   59655 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-071971"
	I0708 21:01:33.690902   59655 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-071971"
	W0708 21:01:33.690915   59655 addons.go:243] addon storage-provisioner should already be in state true
	I0708 21:01:33.690914   59655 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-071971"
	I0708 21:01:33.690939   59655 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-071971"
	I0708 21:01:33.690945   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.690957   59655 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-071971"
	W0708 21:01:33.690968   59655 addons.go:243] addon metrics-server should already be in state true
	I0708 21:01:33.691002   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.691272   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691274   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691294   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.691299   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.691323   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691361   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.692506   59655 out.go:177] * Verifying Kubernetes components...
	I0708 21:01:33.694134   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:01:33.708343   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0708 21:01:33.708681   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0708 21:01:33.708849   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.709011   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.709402   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.709421   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.709559   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.709578   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.709795   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.709864   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.710365   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.710411   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.710417   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.710445   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.710809   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
	I0708 21:01:33.711278   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.711858   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.711892   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.712294   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.712604   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.716565   59655 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-071971"
	W0708 21:01:33.716590   59655 addons.go:243] addon default-storageclass should already be in state true
	I0708 21:01:33.716620   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.716990   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.717041   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.728113   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0708 21:01:33.728257   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0708 21:01:33.728694   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.728742   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.729182   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.729211   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.729331   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.729353   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.729605   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.729663   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.729781   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.729846   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.731832   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.731878   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.734021   59655 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:01:33.734026   59655 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 21:01:33.736062   59655 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:33.736094   59655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 21:01:33.736122   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.736174   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 21:01:33.736192   59655 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 21:01:33.736222   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.736793   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0708 21:01:33.737419   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.739820   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.739837   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.740075   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740272   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.740463   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.740484   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740512   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740818   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.740967   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.741060   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.741213   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.741225   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.741279   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.741309   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.741438   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.741596   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.741587   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.741730   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.741820   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.758223   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0708 21:01:33.758739   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.759237   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.759254   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.759633   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.759909   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.761455   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.761644   59655 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:33.761656   59655 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 21:01:33.761669   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.764245   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.764541   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.764563   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.764701   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.764872   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.765022   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.765126   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.926862   59655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:01:33.980155   59655 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-071971" to be "Ready" ...
	I0708 21:01:33.993505   59655 node_ready.go:49] node "default-k8s-diff-port-071971" has status "Ready":"True"
	I0708 21:01:33.993526   59655 node_ready.go:38] duration metric: took 13.344616ms for node "default-k8s-diff-port-071971" to be "Ready" ...
	I0708 21:01:33.993534   59655 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:34.001402   59655 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:34.045900   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:34.058039   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 21:01:34.058059   59655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 21:01:34.102931   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:34.121513   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 21:01:34.121541   59655 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 21:01:34.190181   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:34.190208   59655 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 21:01:34.232200   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:35.071867   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.071888   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.071977   59655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.026035336s)
	I0708 21:01:35.072026   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.072044   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.072157   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.072192   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.072205   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.072212   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.073887   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.073887   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.073917   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.073989   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.074003   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.074013   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.073907   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.074111   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.074438   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.074461   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.146813   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.146840   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.147181   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.147201   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.337952   59655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.105709862s)
	I0708 21:01:35.338010   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.338023   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.338415   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.338447   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.338461   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.338471   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.338484   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.338733   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.338751   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.338763   59655 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-071971"
	I0708 21:01:35.340678   59655 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0708 21:01:35.341902   59655 addons.go:510] duration metric: took 1.651084154s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0708 21:01:36.011439   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:37.008538   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.008567   59655 pod_ready.go:81] duration metric: took 3.0071384s for pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.008582   59655 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.013291   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.013313   59655 pod_ready.go:81] duration metric: took 4.723566ms for pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.013326   59655 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.017974   59655 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.017997   59655 pod_ready.go:81] duration metric: took 4.66297ms for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.018009   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.022526   59655 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.022550   59655 pod_ready.go:81] duration metric: took 4.533312ms for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.022563   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.027009   59655 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.027032   59655 pod_ready.go:81] duration metric: took 4.462202ms for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.027042   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l2mdd" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.406030   59655 pod_ready.go:92] pod "kube-proxy-l2mdd" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.406055   59655 pod_ready.go:81] duration metric: took 379.00677ms for pod "kube-proxy-l2mdd" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.406064   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.806120   59655 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.806141   59655 pod_ready.go:81] duration metric: took 400.070718ms for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.806151   59655 pod_ready.go:38] duration metric: took 3.812606006s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:37.806165   59655 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:37.806214   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:37.822846   59655 api_server.go:72] duration metric: took 4.132126389s to wait for apiserver process to appear ...
	I0708 21:01:37.822872   59655 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:37.822889   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 21:01:37.827017   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 200:
	ok
	I0708 21:01:37.827906   59655 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:37.827930   59655 api_server.go:131] duration metric: took 5.051704ms to wait for apiserver health ...
	I0708 21:01:37.827938   59655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:38.010909   59655 system_pods.go:59] 9 kube-system pods found
	I0708 21:01:38.010937   59655 system_pods.go:61] "coredns-7db6d8ff4d-8msvk" [38c1e0eb-5eb4-4acb-a5ae-c72871884e3d] Running
	I0708 21:01:38.010942   59655 system_pods.go:61] "coredns-7db6d8ff4d-hq7zj" [ddb0f99d-a91d-4bb7-96e7-695b6101a601] Running
	I0708 21:01:38.010946   59655 system_pods.go:61] "etcd-default-k8s-diff-port-071971" [e3399214-404c-423e-9648-b4d920028a92] Running
	I0708 21:01:38.010949   59655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071971" [7b726b49-c243-4126-b6d2-fc12abc9a042] Running
	I0708 21:01:38.010953   59655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071971" [6a731125-daa4-4da1-b9e0-1206da592fde] Running
	I0708 21:01:38.010956   59655 system_pods.go:61] "kube-proxy-l2mdd" [b1d70ae2-ed86-49bd-8910-a12c5cd8091a] Running
	I0708 21:01:38.010959   59655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071971" [dc238033-038e-49ec-ba48-392b0ec2f7bd] Running
	I0708 21:01:38.010965   59655 system_pods.go:61] "metrics-server-569cc877fc-k8vhl" [09f957f3-d76f-4f21-b9a6-e5b249d07e1e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:38.010970   59655 system_pods.go:61] "storage-provisioner" [805a8fdb-ed9e-4f80-a2c9-7d8a0155b228] Running
	I0708 21:01:38.010979   59655 system_pods.go:74] duration metric: took 183.034922ms to wait for pod list to return data ...
	I0708 21:01:38.010987   59655 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:38.205307   59655 default_sa.go:45] found service account: "default"
	I0708 21:01:38.205331   59655 default_sa.go:55] duration metric: took 194.338319ms for default service account to be created ...
	I0708 21:01:38.205340   59655 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:38.410958   59655 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:38.410988   59655 system_pods.go:89] "coredns-7db6d8ff4d-8msvk" [38c1e0eb-5eb4-4acb-a5ae-c72871884e3d] Running
	I0708 21:01:38.410995   59655 system_pods.go:89] "coredns-7db6d8ff4d-hq7zj" [ddb0f99d-a91d-4bb7-96e7-695b6101a601] Running
	I0708 21:01:38.411000   59655 system_pods.go:89] "etcd-default-k8s-diff-port-071971" [e3399214-404c-423e-9648-b4d920028a92] Running
	I0708 21:01:38.411005   59655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071971" [7b726b49-c243-4126-b6d2-fc12abc9a042] Running
	I0708 21:01:38.411009   59655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071971" [6a731125-daa4-4da1-b9e0-1206da592fde] Running
	I0708 21:01:38.411013   59655 system_pods.go:89] "kube-proxy-l2mdd" [b1d70ae2-ed86-49bd-8910-a12c5cd8091a] Running
	I0708 21:01:38.411017   59655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071971" [dc238033-038e-49ec-ba48-392b0ec2f7bd] Running
	I0708 21:01:38.411024   59655 system_pods.go:89] "metrics-server-569cc877fc-k8vhl" [09f957f3-d76f-4f21-b9a6-e5b249d07e1e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:38.411029   59655 system_pods.go:89] "storage-provisioner" [805a8fdb-ed9e-4f80-a2c9-7d8a0155b228] Running
	I0708 21:01:38.411040   59655 system_pods.go:126] duration metric: took 205.695019ms to wait for k8s-apps to be running ...
	I0708 21:01:38.411050   59655 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:38.411092   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:38.428218   59655 system_svc.go:56] duration metric: took 17.158999ms WaitForService to wait for kubelet
	I0708 21:01:38.428248   59655 kubeadm.go:576] duration metric: took 4.737530934s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:38.428270   59655 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:38.606369   59655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:38.606394   59655 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:38.606404   59655 node_conditions.go:105] duration metric: took 178.130401ms to run NodePressure ...
	I0708 21:01:38.606415   59655 start.go:240] waiting for startup goroutines ...
	I0708 21:01:38.606423   59655 start.go:245] waiting for cluster config update ...
	I0708 21:01:38.606432   59655 start.go:254] writing updated cluster config ...
	I0708 21:01:38.606686   59655 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:38.657280   59655 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:38.659556   59655 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-071971" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.784034672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473029784002582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64a5b677-7f66-4e00-b1b1-045655a8882b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.785119284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=163e6530-edf1-4335-b96b-312d4e3cab72 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.785173159Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=163e6530-edf1-4335-b96b-312d4e3cab72 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.785509871Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472252236650146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed8483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec44202be050bdf4100a4056a19c9b444c0320568f8702a9b253d5cc8df2f4,PodSandboxId:b77db01fb1c53435402ee97d563b2b45bffac06b26f3ee070fd81df84e7c5f02,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720472230092260337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd9a12f5-1cee-4bb0-aa1b-2ee78ab9062b,},Annotations:map[string]string{io.kubernetes.container.hash: aa2ae0f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46,PodSandboxId:d9e968743a97793cde784e402f4baebd906ce873c157650203c43116c4a77e2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472229156041101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb6cr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1efedb-97f2-4bf0-a182-b8329b3bc6f1,},Annotations:map[string]string{io.kubernetes.container.hash: 93921204,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b,PodSandboxId:65aaa2f6076bf5e061050e568401625c2540826b9913f8ff916c3b4665638fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720472221456007652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa04234-ad5a-4a24-b6
a5-152933bb12b9,},Annotations:map[string]string{io.kubernetes.container.hash: b2ab9584,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720472221436827037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed84
83,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919,PodSandboxId:325368b4e3b1a494eb13c5da624041bd17571bd421621f004f13602791fd3656,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472216670177822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf4da22a5727a780be32a5a7e7c4cdb,},Annotations:map[string]string{io.kuber
netes.container.hash: 9942d3a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a,PodSandboxId:a85e18e661f9441d351d5e36f2d09921a0be38e0bfd39009eb43cc0d8e7795b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472216695156121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381e23949c09eb6afe9825084993c3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4,PodSandboxId:c2c699d466b8db7053c9f17f7121b9f0e8525df66a21e105d8ccf229ced8690f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472216673692656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d55ef8ee96afe42a43026500a04e191,},Annotations:map[string]string{io.kubernetes.container.hash: d3bcb
c66,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06,PodSandboxId:c770e062d6dfe7ed846741a9b5bfd2cc5a9155cafa9f29146e2a409f7a8e4e14,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472216653216479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36324e1aa77d8550081ad04dbe675433,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=163e6530-edf1-4335-b96b-312d4e3cab72 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.834225135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a8779b5-b519-421f-81f8-dccd994ac33c name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.834315126Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a8779b5-b519-421f-81f8-dccd994ac33c name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.838417173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f91bb45c-401c-42a8-848a-22249b261d8e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.838968634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473029838938504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f91bb45c-401c-42a8-848a-22249b261d8e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.839869355Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=964b225c-7ed2-4e05-a64f-43c4f51ec946 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.839919525Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=964b225c-7ed2-4e05-a64f-43c4f51ec946 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.840128017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472252236650146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed8483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec44202be050bdf4100a4056a19c9b444c0320568f8702a9b253d5cc8df2f4,PodSandboxId:b77db01fb1c53435402ee97d563b2b45bffac06b26f3ee070fd81df84e7c5f02,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720472230092260337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd9a12f5-1cee-4bb0-aa1b-2ee78ab9062b,},Annotations:map[string]string{io.kubernetes.container.hash: aa2ae0f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46,PodSandboxId:d9e968743a97793cde784e402f4baebd906ce873c157650203c43116c4a77e2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472229156041101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb6cr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1efedb-97f2-4bf0-a182-b8329b3bc6f1,},Annotations:map[string]string{io.kubernetes.container.hash: 93921204,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b,PodSandboxId:65aaa2f6076bf5e061050e568401625c2540826b9913f8ff916c3b4665638fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720472221456007652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa04234-ad5a-4a24-b6
a5-152933bb12b9,},Annotations:map[string]string{io.kubernetes.container.hash: b2ab9584,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720472221436827037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed84
83,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919,PodSandboxId:325368b4e3b1a494eb13c5da624041bd17571bd421621f004f13602791fd3656,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472216670177822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf4da22a5727a780be32a5a7e7c4cdb,},Annotations:map[string]string{io.kuber
netes.container.hash: 9942d3a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a,PodSandboxId:a85e18e661f9441d351d5e36f2d09921a0be38e0bfd39009eb43cc0d8e7795b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472216695156121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381e23949c09eb6afe9825084993c3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4,PodSandboxId:c2c699d466b8db7053c9f17f7121b9f0e8525df66a21e105d8ccf229ced8690f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472216673692656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d55ef8ee96afe42a43026500a04e191,},Annotations:map[string]string{io.kubernetes.container.hash: d3bcb
c66,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06,PodSandboxId:c770e062d6dfe7ed846741a9b5bfd2cc5a9155cafa9f29146e2a409f7a8e4e14,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472216653216479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36324e1aa77d8550081ad04dbe675433,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=964b225c-7ed2-4e05-a64f-43c4f51ec946 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.890209038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=712a232a-ceb6-4307-a16c-086c45f49960 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.890315440Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=712a232a-ceb6-4307-a16c-086c45f49960 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.891909115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=309e8267-2305-4ce2-852d-dc5a0d70e833 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.892826400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473029892780422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=309e8267-2305-4ce2-852d-dc5a0d70e833 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.893485139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11055fc9-5b44-4359-b012-308300421bae name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.893689511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11055fc9-5b44-4359-b012-308300421bae name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.893955346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472252236650146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed8483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec44202be050bdf4100a4056a19c9b444c0320568f8702a9b253d5cc8df2f4,PodSandboxId:b77db01fb1c53435402ee97d563b2b45bffac06b26f3ee070fd81df84e7c5f02,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720472230092260337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd9a12f5-1cee-4bb0-aa1b-2ee78ab9062b,},Annotations:map[string]string{io.kubernetes.container.hash: aa2ae0f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46,PodSandboxId:d9e968743a97793cde784e402f4baebd906ce873c157650203c43116c4a77e2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472229156041101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb6cr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1efedb-97f2-4bf0-a182-b8329b3bc6f1,},Annotations:map[string]string{io.kubernetes.container.hash: 93921204,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b,PodSandboxId:65aaa2f6076bf5e061050e568401625c2540826b9913f8ff916c3b4665638fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720472221456007652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa04234-ad5a-4a24-b6
a5-152933bb12b9,},Annotations:map[string]string{io.kubernetes.container.hash: b2ab9584,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720472221436827037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed84
83,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919,PodSandboxId:325368b4e3b1a494eb13c5da624041bd17571bd421621f004f13602791fd3656,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472216670177822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf4da22a5727a780be32a5a7e7c4cdb,},Annotations:map[string]string{io.kuber
netes.container.hash: 9942d3a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a,PodSandboxId:a85e18e661f9441d351d5e36f2d09921a0be38e0bfd39009eb43cc0d8e7795b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472216695156121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381e23949c09eb6afe9825084993c3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4,PodSandboxId:c2c699d466b8db7053c9f17f7121b9f0e8525df66a21e105d8ccf229ced8690f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472216673692656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d55ef8ee96afe42a43026500a04e191,},Annotations:map[string]string{io.kubernetes.container.hash: d3bcb
c66,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06,PodSandboxId:c770e062d6dfe7ed846741a9b5bfd2cc5a9155cafa9f29146e2a409f7a8e4e14,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472216653216479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36324e1aa77d8550081ad04dbe675433,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11055fc9-5b44-4359-b012-308300421bae name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.939717488Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c88109a-4b3d-4df6-9888-978919fbb652 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.939806324Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c88109a-4b3d-4df6-9888-978919fbb652 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.941011350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c0f6962-39a7-4028-8a65-15d7f730b98e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.941987383Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473029941963567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c0f6962-39a7-4028-8a65-15d7f730b98e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.942682102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8214d44-091e-445a-8cb7-1915054eb1d9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.942750980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8214d44-091e-445a-8cb7-1915054eb1d9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:29 no-preload-028021 crio[719]: time="2024-07-08 21:10:29.942938602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472252236650146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed8483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec44202be050bdf4100a4056a19c9b444c0320568f8702a9b253d5cc8df2f4,PodSandboxId:b77db01fb1c53435402ee97d563b2b45bffac06b26f3ee070fd81df84e7c5f02,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720472230092260337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd9a12f5-1cee-4bb0-aa1b-2ee78ab9062b,},Annotations:map[string]string{io.kubernetes.container.hash: aa2ae0f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46,PodSandboxId:d9e968743a97793cde784e402f4baebd906ce873c157650203c43116c4a77e2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472229156041101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb6cr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1efedb-97f2-4bf0-a182-b8329b3bc6f1,},Annotations:map[string]string{io.kubernetes.container.hash: 93921204,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b,PodSandboxId:65aaa2f6076bf5e061050e568401625c2540826b9913f8ff916c3b4665638fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720472221456007652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa04234-ad5a-4a24-b6
a5-152933bb12b9,},Annotations:map[string]string{io.kubernetes.container.hash: b2ab9584,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720472221436827037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed84
83,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919,PodSandboxId:325368b4e3b1a494eb13c5da624041bd17571bd421621f004f13602791fd3656,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472216670177822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf4da22a5727a780be32a5a7e7c4cdb,},Annotations:map[string]string{io.kuber
netes.container.hash: 9942d3a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a,PodSandboxId:a85e18e661f9441d351d5e36f2d09921a0be38e0bfd39009eb43cc0d8e7795b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472216695156121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381e23949c09eb6afe9825084993c3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4,PodSandboxId:c2c699d466b8db7053c9f17f7121b9f0e8525df66a21e105d8ccf229ced8690f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472216673692656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d55ef8ee96afe42a43026500a04e191,},Annotations:map[string]string{io.kubernetes.container.hash: d3bcb
c66,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06,PodSandboxId:c770e062d6dfe7ed846741a9b5bfd2cc5a9155cafa9f29146e2a409f7a8e4e14,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472216653216479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36324e1aa77d8550081ad04dbe675433,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8214d44-091e-445a-8cb7-1915054eb1d9 name=/runtime.v1.RuntimeService/ListContainers
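	The repeated RuntimeService/ListContainers, Version and ImageFsInfo responses above are the kubelet and the log collector polling cri-o over its CRI socket. A minimal sketch of pulling the same listing by hand, assuming the default cri-o socket path recorded in the node's kubeadm cri-socket annotation (unix:///var/run/crio/crio.sock):
	
	    # table view, roughly what the "container status" section below shows
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	    # raw ListContainers fields as JSON
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json
	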
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7fef16ca13964       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   62fbc1cf8e9ce       storage-provisioner
	baec44202be05       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   b77db01fb1c53       busybox
	d36b82d801f16       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   d9e968743a977       coredns-7db6d8ff4d-bb6cr
	abef906794957       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago      Running             kube-proxy                1                   65aaa2f6076bf       kube-proxy-6p6l6
	a08f999b554b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   62fbc1cf8e9ce       storage-provisioner
	7c6733c9e5040       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago      Running             kube-scheduler            1                   a85e18e661f94       kube-scheduler-no-preload-028021
	32bb552a97107       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      13 minutes ago      Running             kube-apiserver            1                   c2c699d466b8d       kube-apiserver-no-preload-028021
	3c78c8f11d8c3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   325368b4e3b1a       etcd-no-preload-028021
	2e901eb02d631       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      13 minutes ago      Running             kube-controller-manager   1                   c770e062d6dfe       kube-controller-manager-no-preload-028021
	
	
	==> coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53802 - 13940 "HINFO IN 4359606603896240805.7306614040164904022. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012443562s
	
	
	==> describe nodes <==
	Name:               no-preload-028021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-028021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=no-preload-028021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T20_47_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 20:47:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-028021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 21:10:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 21:07:43 +0000   Mon, 08 Jul 2024 20:47:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 21:07:43 +0000   Mon, 08 Jul 2024 20:47:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 21:07:43 +0000   Mon, 08 Jul 2024 20:47:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 21:07:43 +0000   Mon, 08 Jul 2024 20:57:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    no-preload-028021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b7b92ffb8b7447e9dbe49719c6af7c0
	  System UUID:                2b7b92ff-b8b7-447e-9dbe-49719c6af7c0
	  Boot ID:                    88f2572a-61d3-4bee-b6a2-51cd06d2f771
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-bb6cr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-028021                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-028021             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-028021    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-6p6l6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-028021             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-4kpfm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-028021 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-028021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-028021 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-028021 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-028021 event: Registered Node no-preload-028021 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-028021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-028021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-028021 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-028021 event: Registered Node no-preload-028021 in Controller
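	The node summary above is standard "kubectl describe node" output captured by the log collector. A minimal sketch of reproducing it against this profile, assuming the minikube profile name doubles as the kubectl context name (minikube's default behaviour):
	
	    kubectl --context no-preload-028021 describe node no-preload-028021
	    kubectl --context no-preload-028021 get node no-preload-028021 -o wide
	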
	
	
	==> dmesg <==
	[Jul 8 20:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052845] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040103] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.819579] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.399465] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.625399] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.045115] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.068281] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079915] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.201510] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.140397] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.320915] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[ +16.982326] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.060419] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.120370] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[Jul 8 20:57] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.981451] systemd-fstab-generator[1979]: Ignoring "noauto" option for root device
	[  +1.682382] kauditd_printk_skb: 56 callbacks suppressed
	[  +7.538657] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] <==
	{"level":"info","ts":"2024-07-08T20:56:57.18442Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ad19b3444912fc40","local-member-id":"3b067627ba430497","added-peer-id":"3b067627ba430497","added-peer-peer-urls":["https://192.168.39.108:2380"]}
	{"level":"info","ts":"2024-07-08T20:56:57.184614Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ad19b3444912fc40","local-member-id":"3b067627ba430497","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T20:56:57.184655Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T20:56:57.183541Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T20:56:57.185813Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3b067627ba430497","initial-advertise-peer-urls":["https://192.168.39.108:2380"],"listen-peer-urls":["https://192.168.39.108:2380"],"advertise-client-urls":["https://192.168.39.108:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.108:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T20:56:57.185859Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T20:56:57.186013Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.108:2380"}
	{"level":"info","ts":"2024-07-08T20:56:57.186041Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.108:2380"}
	{"level":"info","ts":"2024-07-08T20:56:58.760115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b067627ba430497 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-08T20:56:58.760229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b067627ba430497 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-08T20:56:58.760297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b067627ba430497 received MsgPreVoteResp from 3b067627ba430497 at term 2"}
	{"level":"info","ts":"2024-07-08T20:56:58.760334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b067627ba430497 became candidate at term 3"}
	{"level":"info","ts":"2024-07-08T20:56:58.760416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b067627ba430497 received MsgVoteResp from 3b067627ba430497 at term 3"}
	{"level":"info","ts":"2024-07-08T20:56:58.760454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b067627ba430497 became leader at term 3"}
	{"level":"info","ts":"2024-07-08T20:56:58.760494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3b067627ba430497 elected leader 3b067627ba430497 at term 3"}
	{"level":"info","ts":"2024-07-08T20:56:58.772433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T20:56:58.773875Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3b067627ba430497","local-member-attributes":"{Name:no-preload-028021 ClientURLs:[https://192.168.39.108:2379]}","request-path":"/0/members/3b067627ba430497/attributes","cluster-id":"ad19b3444912fc40","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T20:56:58.774036Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T20:56:58.774203Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T20:56:58.774232Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T20:56:58.775915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T20:56:58.777625Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.108:2379"}
	{"level":"info","ts":"2024-07-08T21:06:58.808606Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":800}
	{"level":"info","ts":"2024-07-08T21:06:58.820807Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":800,"took":"11.815513ms","hash":2016596171,"current-db-size-bytes":2535424,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2535424,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-07-08T21:06:58.820877Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2016596171,"revision":800,"compact-revision":-1}
	
	
	==> kernel <==
	 21:10:30 up 14 min,  0 users,  load average: 0.02, 0.11, 0.08
	Linux no-preload-028021 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] <==
	I0708 21:05:01.226503       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:07:00.229251       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:07:00.229447       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0708 21:07:01.229702       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:07:01.229825       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:07:01.229857       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:07:01.229925       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:07:01.230014       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:07:01.231198       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:08:01.230191       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:08:01.230273       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:08:01.230284       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:08:01.231513       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:08:01.231652       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:08:01.231662       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:10:01.231339       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:10:01.231448       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:10:01.231460       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:10:01.232600       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:10:01.232795       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:10:01.232831       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] <==
	I0708 21:04:45.214849       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:05:14.746721       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:05:15.226410       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:05:44.751713       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:05:45.235086       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:06:14.756616       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:06:15.242904       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:06:44.762475       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:06:45.251439       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:07:14.768344       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:07:15.259436       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:07:44.774102       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:07:45.267690       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0708 21:08:10.039812       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="638.822µs"
	E0708 21:08:14.779178       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:08:15.275641       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0708 21:08:25.035105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="70.548µs"
	E0708 21:08:44.785723       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:08:45.283211       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:09:14.791748       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:09:15.290981       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:09:44.798992       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:09:45.298506       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:10:14.804526       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:10:15.306931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] <==
	I0708 20:57:01.630925       1 server_linux.go:69] "Using iptables proxy"
	I0708 20:57:01.644319       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.108"]
	I0708 20:57:01.683651       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 20:57:01.683703       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 20:57:01.683721       1 server_linux.go:165] "Using iptables Proxier"
	I0708 20:57:01.686657       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 20:57:01.686891       1 server.go:872] "Version info" version="v1.30.2"
	I0708 20:57:01.686922       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:57:01.688220       1 config.go:192] "Starting service config controller"
	I0708 20:57:01.688251       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 20:57:01.688276       1 config.go:101] "Starting endpoint slice config controller"
	I0708 20:57:01.688280       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 20:57:01.688858       1 config.go:319] "Starting node config controller"
	I0708 20:57:01.688893       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 20:57:01.789063       1 shared_informer.go:320] Caches are synced for node config
	I0708 20:57:01.789100       1 shared_informer.go:320] Caches are synced for service config
	I0708 20:57:01.789114       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] <==
	I0708 20:56:58.208533       1 serving.go:380] Generated self-signed cert in-memory
	W0708 20:57:00.121622       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 20:57:00.121819       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 20:57:00.121908       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 20:57:00.121934       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 20:57:00.213394       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0708 20:57:00.213603       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:57:00.225294       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0708 20:57:00.225406       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 20:57:00.225724       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0708 20:57:00.225825       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0708 20:57:00.325728       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 08 21:07:57 no-preload-028021 kubelet[1359]: E0708 21:07:57.039815    1359 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 08 21:07:57 no-preload-028021 kubelet[1359]: E0708 21:07:57.039965    1359 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 08 21:07:57 no-preload-028021 kubelet[1359]: E0708 21:07:57.040866    1359 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qc8kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-4kpfm_kube-system(c37f4622-163f-48bf-9bb4-5a20b88187ad): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 08 21:07:57 no-preload-028021 kubelet[1359]: E0708 21:07:57.041036    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:08:10 no-preload-028021 kubelet[1359]: E0708 21:08:10.019635    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:08:25 no-preload-028021 kubelet[1359]: E0708 21:08:25.020069    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:08:38 no-preload-028021 kubelet[1359]: E0708 21:08:38.020168    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:08:52 no-preload-028021 kubelet[1359]: E0708 21:08:52.019356    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:08:56 no-preload-028021 kubelet[1359]: E0708 21:08:56.037038    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:08:56 no-preload-028021 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:08:56 no-preload-028021 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:08:56 no-preload-028021 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:08:56 no-preload-028021 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:09:07 no-preload-028021 kubelet[1359]: E0708 21:09:07.020283    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:09:19 no-preload-028021 kubelet[1359]: E0708 21:09:19.020610    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:09:32 no-preload-028021 kubelet[1359]: E0708 21:09:32.020719    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:09:45 no-preload-028021 kubelet[1359]: E0708 21:09:45.019857    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:09:56 no-preload-028021 kubelet[1359]: E0708 21:09:56.035213    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:09:56 no-preload-028021 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:09:56 no-preload-028021 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:09:56 no-preload-028021 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:09:56 no-preload-028021 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:09:59 no-preload-028021 kubelet[1359]: E0708 21:09:59.019440    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:10:13 no-preload-028021 kubelet[1359]: E0708 21:10:13.019780    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:10:28 no-preload-028021 kubelet[1359]: E0708 21:10:28.020269    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	
	
	==> storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] <==
	I0708 20:57:32.337970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 20:57:32.348471       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 20:57:32.348649       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 20:57:49.749251       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 20:57:49.749425       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-028021_5f3b64ba-d14a-4614-82b4-eac6452feda0!
	I0708 20:57:49.750357       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ac2159a8-00d0-402d-b75e-f4a46bc30629", APIVersion:"v1", ResourceVersion:"584", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-028021_5f3b64ba-d14a-4614-82b4-eac6452feda0 became leader
	I0708 20:57:49.849765       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-028021_5f3b64ba-d14a-4614-82b4-eac6452feda0!
	
	
	==> storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] <==
	I0708 20:57:01.582028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0708 20:57:31.586346       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-028021 -n no-preload-028021
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-028021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-4kpfm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-028021 describe pod metrics-server-569cc877fc-4kpfm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-028021 describe pod metrics-server-569cc877fc-4kpfm: exit status 1 (60.728729ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-4kpfm" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-028021 describe pod metrics-server-569cc877fc-4kpfm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.95s)
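The wait that times out here is a plain label-selector readiness check; a minimal manual sketch for re-checking it against the same profile, assuming the kubernetes-dashboard namespace and the k8s-app=kubernetes-dashboard selector that the sibling default-k8s-diff-port test below waits on (the timeout value is illustrative, not taken from the test):

  # hypothetical manual re-check; context name taken from the report above
  kubectl --context no-preload-028021 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  kubectl --context no-preload-028021 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s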

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-08 21:10:39.214581809 +0000 UTC m=+6099.127459608
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-071971 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-071971 logs -n 25: (1.551918631s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-897827                                        | pause-897827                 | jenkins | v1.33.1 | 08 Jul 24 20:46 UTC | 08 Jul 24 20:46 UTC |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:46 UTC | 08 Jul 24 20:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| ssh     | cert-options-059722 ssh                                | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-059722 -- sudo                         | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-059722                                 | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-028021             | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-914355             | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-239931            | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-733920 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-733920                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:50 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028021                  | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071971  | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-239931                 | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071971       | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC | 08 Jul 24 21:01 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 20:53:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 20:53:37.291760   59655 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:53:37.291847   59655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:53:37.291851   59655 out.go:304] Setting ErrFile to fd 2...
	I0708 20:53:37.291855   59655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:53:37.292047   59655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:53:37.292558   59655 out.go:298] Setting JSON to false
	I0708 20:53:37.293434   59655 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5766,"bootTime":1720466251,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:53:37.293485   59655 start.go:139] virtualization: kvm guest
	I0708 20:53:37.296412   59655 out.go:177] * [default-k8s-diff-port-071971] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:53:37.297727   59655 notify.go:220] Checking for updates...
	I0708 20:53:37.297756   59655 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:53:37.299168   59655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:53:37.300541   59655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:53:37.301818   59655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:53:37.303117   59655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:53:37.304266   59655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:53:37.305793   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:53:37.306182   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:53:37.306236   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:53:37.321719   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I0708 20:53:37.322090   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:53:37.322593   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:53:37.322617   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:53:37.322908   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:53:37.323093   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:53:37.323329   59655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:53:37.323638   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:53:37.323679   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:53:37.338244   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0708 20:53:37.338660   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:53:37.339118   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:53:37.339144   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:53:37.339463   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:53:37.339735   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:53:37.374356   59655 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 20:53:37.375714   59655 start.go:297] selected driver: kvm2
	I0708 20:53:37.375729   59655 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:53:37.375866   59655 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:53:37.376843   59655 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:53:37.376918   59655 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:53:37.391219   59655 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:53:37.391602   59655 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:53:37.391659   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:53:37.391672   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:53:37.391707   59655 start.go:340] cluster config:
	{Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:53:37.391797   59655 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:53:37.393453   59655 out.go:177] * Starting "default-k8s-diff-port-071971" primary control-plane node in "default-k8s-diff-port-071971" cluster
	I0708 20:53:37.923695   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:40.995762   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:37.394736   59655 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:53:37.394768   59655 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 20:53:37.394777   59655 cache.go:56] Caching tarball of preloaded images
	I0708 20:53:37.394849   59655 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 20:53:37.394860   59655 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 20:53:37.394962   59655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:53:37.395154   59655 start.go:360] acquireMachinesLock for default-k8s-diff-port-071971: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:53:47.075721   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:50.147727   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:56.227766   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:59.299738   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:05.379699   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:08.451749   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:14.531759   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:17.603688   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:23.683730   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:26.755738   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:32.835706   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:35.907702   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:41.987722   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:45.059873   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:51.139726   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:54.211797   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:00.291730   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:03.363720   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:09.443741   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:12.515718   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:19.358315   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:55:19.358408   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:55:19.359948   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:55:19.360000   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:55:19.360076   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:55:19.360217   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:55:19.360354   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:55:19.360443   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:55:19.362594   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:55:19.362671   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:55:19.362761   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:55:19.362915   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:55:19.362997   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:55:19.363087   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:55:19.363181   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:55:19.363271   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:55:19.363360   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:55:19.363470   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:55:19.363582   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:55:19.363636   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:55:19.363711   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:55:19.363781   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:55:19.363852   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:55:19.363941   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:55:19.364010   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:55:19.364135   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:55:19.364226   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:55:19.364276   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:55:19.364342   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:55:18.595786   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:19.366132   57466 out.go:204]   - Booting up control plane ...
	I0708 20:55:19.366219   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:55:19.366301   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:55:19.366364   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:55:19.366433   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:55:19.366579   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:55:19.366629   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:55:19.366692   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.366846   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.366909   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367070   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367133   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367285   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367344   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367511   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367575   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367735   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367743   57466 kubeadm.go:309] 
	I0708 20:55:19.367783   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:55:19.367817   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:55:19.367823   57466 kubeadm.go:309] 
	I0708 20:55:19.367851   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:55:19.367888   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:55:19.367991   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:55:19.368009   57466 kubeadm.go:309] 
	I0708 20:55:19.368127   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:55:19.368164   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:55:19.368192   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:55:19.368198   57466 kubeadm.go:309] 
	I0708 20:55:19.368284   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:55:19.368355   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:55:19.368362   57466 kubeadm.go:309] 
	I0708 20:55:19.368455   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:55:19.368539   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:55:19.368606   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:55:19.368666   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:55:19.368673   57466 kubeadm.go:309] 
	W0708 20:55:19.368784   57466 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0708 20:55:19.368831   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 20:55:19.838778   57466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:55:19.853958   57466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:55:19.863986   57466 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:55:19.864010   57466 kubeadm.go:156] found existing configuration files:
	
	I0708 20:55:19.864055   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:55:19.873085   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:55:19.873147   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:55:19.882654   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:55:19.891579   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:55:19.891634   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:55:19.901397   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.910901   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:55:19.910976   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.920599   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:55:19.929826   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:55:19.929891   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:55:19.939284   57466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 20:55:20.153136   57466 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 20:55:21.667700   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:27.747756   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:30.819712   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:33.824320   59107 start.go:364] duration metric: took 3m48.54985296s to acquireMachinesLock for "embed-certs-239931"
	I0708 20:55:33.824375   59107 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:55:33.824390   59107 fix.go:54] fixHost starting: 
	I0708 20:55:33.824700   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:55:33.824728   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:55:33.839554   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0708 20:55:33.839987   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:55:33.840472   59107 main.go:141] libmachine: Using API Version  1
	I0708 20:55:33.840495   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:55:33.840844   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:55:33.841030   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:33.841194   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 20:55:33.842597   59107 fix.go:112] recreateIfNeeded on embed-certs-239931: state=Stopped err=<nil>
	I0708 20:55:33.842627   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	W0708 20:55:33.842787   59107 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:55:33.844574   59107 out.go:177] * Restarting existing kvm2 VM for "embed-certs-239931" ...
	I0708 20:55:33.845674   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Start
	I0708 20:55:33.845858   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring networks are active...
	I0708 20:55:33.846607   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring network default is active
	I0708 20:55:33.846907   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring network mk-embed-certs-239931 is active
	I0708 20:55:33.847329   59107 main.go:141] libmachine: (embed-certs-239931) Getting domain xml...
	I0708 20:55:33.848120   59107 main.go:141] libmachine: (embed-certs-239931) Creating domain...
	I0708 20:55:35.057523   59107 main.go:141] libmachine: (embed-certs-239931) Waiting to get IP...
	I0708 20:55:35.058300   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.058841   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.058870   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.058773   60083 retry.go:31] will retry after 280.969113ms: waiting for machine to come up
	I0708 20:55:33.821580   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:55:33.821617   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:55:33.821932   58678 buildroot.go:166] provisioning hostname "no-preload-028021"
	I0708 20:55:33.821957   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:55:33.822166   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:55:33.824193   58678 machine.go:97] duration metric: took 4m37.421469682s to provisionDockerMachine
	I0708 20:55:33.824234   58678 fix.go:56] duration metric: took 4m37.444794791s for fixHost
	I0708 20:55:33.824241   58678 start.go:83] releasing machines lock for "no-preload-028021", held for 4m37.44481517s
	W0708 20:55:33.824262   58678 start.go:713] error starting host: provision: host is not running
	W0708 20:55:33.824343   58678 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0708 20:55:33.824352   58678 start.go:728] Will try again in 5 seconds ...
	I0708 20:55:35.341327   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.341861   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.341882   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.341837   60083 retry.go:31] will retry after 333.972717ms: waiting for machine to come up
	I0708 20:55:35.677531   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.678035   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.678066   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.677984   60083 retry.go:31] will retry after 387.46652ms: waiting for machine to come up
	I0708 20:55:36.066618   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:36.067079   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:36.067106   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:36.067044   60083 retry.go:31] will retry after 523.369614ms: waiting for machine to come up
	I0708 20:55:36.591863   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:36.592337   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:36.592363   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:36.592295   60083 retry.go:31] will retry after 670.675561ms: waiting for machine to come up
	I0708 20:55:37.264084   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:37.264521   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:37.264565   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:37.264485   60083 retry.go:31] will retry after 775.348922ms: waiting for machine to come up
	I0708 20:55:38.041398   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:38.041860   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:38.041885   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:38.041801   60083 retry.go:31] will retry after 1.135585711s: waiting for machine to come up
	I0708 20:55:39.179405   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:39.179910   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:39.179938   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:39.179867   60083 retry.go:31] will retry after 1.422689354s: waiting for machine to come up
	I0708 20:55:38.826037   58678 start.go:360] acquireMachinesLock for no-preload-028021: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:55:40.603811   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:40.604240   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:40.604261   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:40.604199   60083 retry.go:31] will retry after 1.640612147s: waiting for machine to come up
	I0708 20:55:42.247230   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:42.247797   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:42.247837   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:42.247733   60083 retry.go:31] will retry after 2.031069729s: waiting for machine to come up
	I0708 20:55:44.280878   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:44.281419   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:44.281451   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:44.281355   60083 retry.go:31] will retry after 2.394813785s: waiting for machine to come up
	I0708 20:55:46.678897   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:46.679398   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:46.679430   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:46.679357   60083 retry.go:31] will retry after 2.419242459s: waiting for machine to come up
	I0708 20:55:49.100362   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:49.100901   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:49.100964   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:49.100858   60083 retry.go:31] will retry after 4.241202363s: waiting for machine to come up
	I0708 20:55:54.868873   59655 start.go:364] duration metric: took 2m17.473689428s to acquireMachinesLock for "default-k8s-diff-port-071971"
	I0708 20:55:54.868970   59655 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:55:54.868991   59655 fix.go:54] fixHost starting: 
	I0708 20:55:54.869400   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:55:54.869432   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:55:54.888853   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44159
	I0708 20:55:54.889234   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:55:54.889674   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:55:54.889698   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:55:54.890009   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:55:54.890196   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:55:54.890332   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 20:55:54.891932   59655 fix.go:112] recreateIfNeeded on default-k8s-diff-port-071971: state=Stopped err=<nil>
	I0708 20:55:54.891972   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	W0708 20:55:54.892120   59655 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:55:54.894439   59655 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-071971" ...
	I0708 20:55:53.347154   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.347587   59107 main.go:141] libmachine: (embed-certs-239931) Found IP for machine: 192.168.61.126
	I0708 20:55:53.347601   59107 main.go:141] libmachine: (embed-certs-239931) Reserving static IP address...
	I0708 20:55:53.347612   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has current primary IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.348084   59107 main.go:141] libmachine: (embed-certs-239931) Reserved static IP address: 192.168.61.126
	I0708 20:55:53.348106   59107 main.go:141] libmachine: (embed-certs-239931) Waiting for SSH to be available...
	I0708 20:55:53.348119   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "embed-certs-239931", mac: "52:54:00:b3:d9:ac", ip: "192.168.61.126"} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.348136   59107 main.go:141] libmachine: (embed-certs-239931) DBG | skip adding static IP to network mk-embed-certs-239931 - found existing host DHCP lease matching {name: "embed-certs-239931", mac: "52:54:00:b3:d9:ac", ip: "192.168.61.126"}
	I0708 20:55:53.348148   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Getting to WaitForSSH function...
	I0708 20:55:53.350167   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.350545   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.350583   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.350651   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Using SSH client type: external
	I0708 20:55:53.350675   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa (-rw-------)
	I0708 20:55:53.350704   59107 main.go:141] libmachine: (embed-certs-239931) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:55:53.350722   59107 main.go:141] libmachine: (embed-certs-239931) DBG | About to run SSH command:
	I0708 20:55:53.350736   59107 main.go:141] libmachine: (embed-certs-239931) DBG | exit 0
	I0708 20:55:53.479934   59107 main.go:141] libmachine: (embed-certs-239931) DBG | SSH cmd err, output: <nil>: 
	I0708 20:55:53.480309   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetConfigRaw
	I0708 20:55:53.480891   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:53.483079   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.483399   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.483424   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.483740   59107 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/config.json ...
	I0708 20:55:53.483920   59107 machine.go:94] provisionDockerMachine start ...
	I0708 20:55:53.483937   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:53.484172   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.486461   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.486772   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.486793   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.486921   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.487075   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.487253   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.487385   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.487556   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.487774   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.487786   59107 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:55:53.600047   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:55:53.600078   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.600308   59107 buildroot.go:166] provisioning hostname "embed-certs-239931"
	I0708 20:55:53.600342   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.600508   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.603107   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.603498   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.603529   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.603728   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.603954   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.604122   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.604345   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.604512   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.604716   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.604737   59107 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-239931 && echo "embed-certs-239931" | sudo tee /etc/hostname
	I0708 20:55:53.734414   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-239931
	
	I0708 20:55:53.734457   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.737117   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.737473   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.737501   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.737640   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.737852   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.738020   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.738184   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.738355   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.738536   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.738558   59107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-239931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-239931/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-239931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:55:53.860753   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:55:53.860781   59107 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:55:53.860799   59107 buildroot.go:174] setting up certificates
	I0708 20:55:53.860808   59107 provision.go:84] configureAuth start
	I0708 20:55:53.860816   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.861070   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:53.863652   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.863999   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.864018   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.864221   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.866207   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.866480   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.866504   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.866613   59107 provision.go:143] copyHostCerts
	I0708 20:55:53.866671   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:55:53.866680   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:55:53.866741   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:55:53.866837   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:55:53.866845   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:55:53.866868   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:55:53.866932   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:55:53.866939   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:55:53.866959   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:55:53.867017   59107 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.embed-certs-239931 san=[127.0.0.1 192.168.61.126 embed-certs-239931 localhost minikube]
	I0708 20:55:54.171765   59107 provision.go:177] copyRemoteCerts
	I0708 20:55:54.171835   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:55:54.171859   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.174341   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.174621   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.174650   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.174762   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.174957   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.175129   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.175280   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.262051   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:55:54.287118   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0708 20:55:54.310071   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:55:54.337811   59107 provision.go:87] duration metric: took 476.990356ms to configureAuth
	I0708 20:55:54.337851   59107 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:55:54.338077   59107 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:55:54.338147   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.340972   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.341259   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.341296   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.341423   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.341720   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.341870   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.342006   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.342147   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:54.342332   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:54.342350   59107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:55:54.618752   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:55:54.618775   59107 machine.go:97] duration metric: took 1.134844127s to provisionDockerMachine
	I0708 20:55:54.618786   59107 start.go:293] postStartSetup for "embed-certs-239931" (driver="kvm2")
	I0708 20:55:54.618795   59107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:55:54.618823   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.619220   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:55:54.619249   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.621857   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.622144   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.622168   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.622348   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.622532   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.622703   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.622853   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.710096   59107 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:55:54.714437   59107 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:55:54.714458   59107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:55:54.714524   59107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:55:54.714597   59107 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:55:54.714679   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:55:54.724350   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:55:54.748078   59107 start.go:296] duration metric: took 129.264358ms for postStartSetup
	I0708 20:55:54.748120   59107 fix.go:56] duration metric: took 20.923736253s for fixHost
	I0708 20:55:54.748138   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.750818   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.751200   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.751227   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.751377   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.751611   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.751763   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.751879   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.752034   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:54.752240   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:54.752256   59107 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:55:54.868663   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472154.844724958
	
	I0708 20:55:54.868694   59107 fix.go:216] guest clock: 1720472154.844724958
	I0708 20:55:54.868706   59107 fix.go:229] Guest: 2024-07-08 20:55:54.844724958 +0000 UTC Remote: 2024-07-08 20:55:54.748123056 +0000 UTC m=+249.617599643 (delta=96.601902ms)
	I0708 20:55:54.868764   59107 fix.go:200] guest clock delta is within tolerance: 96.601902ms
	I0708 20:55:54.868776   59107 start.go:83] releasing machines lock for "embed-certs-239931", held for 21.044425411s
	I0708 20:55:54.868811   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.869092   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:54.871867   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.872252   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.872295   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.872451   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.872921   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.873060   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.873151   59107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:55:54.873196   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.873271   59107 ssh_runner.go:195] Run: cat /version.json
	I0708 20:55:54.873297   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.876118   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876142   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876614   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.876641   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876682   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.876699   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876851   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.876903   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.877017   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.877020   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.877193   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.877266   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.877349   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.877424   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.984516   59107 ssh_runner.go:195] Run: systemctl --version
	I0708 20:55:54.990926   59107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:55:55.142010   59107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:55:55.148138   59107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:55:55.148203   59107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:55:55.164086   59107 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:55:55.164111   59107 start.go:494] detecting cgroup driver to use...
	I0708 20:55:55.164204   59107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:55:55.184836   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:55:55.204002   59107 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:55:55.204079   59107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:55:55.218405   59107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:55:55.233462   59107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:55:55.357278   59107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:55:55.521141   59107 docker.go:233] disabling docker service ...
	I0708 20:55:55.521218   59107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:55:55.538949   59107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:55:55.558613   59107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:55:55.696926   59107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:55:55.819810   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:55:55.837012   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:55:55.856417   59107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:55:55.856497   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.868488   59107 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:55:55.868556   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.879503   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.891183   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.901872   59107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:55:55.914498   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.925676   59107 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.944340   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
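Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, switching the cgroup manager, setting conmon_cgroup and opening unprivileged ports. A minimal spot-check sketch (the grep is illustrative, not something minikube runs):

    # Inspect the cri-o drop-in the edits above are expected to produce.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected values, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls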
	I0708 20:55:55.955961   59107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:55:55.965785   59107 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:55:55.965845   59107 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:55:55.979853   59107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
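The previous two commands cover the usual kernel prerequisites for the bridge CNI chosen later: the br_netfilter module (so bridged traffic is seen by iptables) and IPv4 forwarding. A hedged sketch for checking them by hand:

    # Verify the prerequisites enabled above (illustrative only).
    sudo modprobe br_netfilter                        # no-op if already loaded
    sysctl net.bridge.bridge-nf-call-iptables         # typically 1 once the module is present
    cat /proc/sys/net/ipv4/ip_forward                 # should now print 1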
	I0708 20:55:55.989331   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:55:56.108476   59107 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:55:56.262396   59107 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:55:56.262463   59107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:55:56.267682   59107 start.go:562] Will wait 60s for crictl version
	I0708 20:55:56.267740   59107 ssh_runner.go:195] Run: which crictl
	I0708 20:55:56.273115   59107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:55:56.323276   59107 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:55:56.323364   59107 ssh_runner.go:195] Run: crio --version
	I0708 20:55:56.352650   59107 ssh_runner.go:195] Run: crio --version
	I0708 20:55:56.394502   59107 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:55:54.895951   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Start
	I0708 20:55:54.896150   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring networks are active...
	I0708 20:55:54.896971   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring network default is active
	I0708 20:55:54.897344   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring network mk-default-k8s-diff-port-071971 is active
	I0708 20:55:54.897672   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Getting domain xml...
	I0708 20:55:54.898340   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Creating domain...
	I0708 20:55:56.182187   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting to get IP...
	I0708 20:55:56.183209   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.183699   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.183759   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.183663   60221 retry.go:31] will retry after 255.382138ms: waiting for machine to come up
	I0708 20:55:56.441290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.441760   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.441789   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.441718   60221 retry.go:31] will retry after 363.116234ms: waiting for machine to come up
	I0708 20:55:56.806418   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.806871   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.806899   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.806819   60221 retry.go:31] will retry after 392.319836ms: waiting for machine to come up
	I0708 20:55:57.200645   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.201144   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.201176   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:57.201095   60221 retry.go:31] will retry after 528.490844ms: waiting for machine to come up
	I0708 20:55:56.395778   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:56.398458   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:56.398826   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:56.398853   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:56.399088   59107 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0708 20:55:56.403789   59107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
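The one-liner above pins host.minikube.internal to the host-side gateway in the guest's /etc/hosts: it strips any stale entry, appends the mapping, and copies the temp file back with sudo. The same edit, written out for readability:

    # Equivalent to the single-line /etc/hosts edit above.
    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.new    # drop any old entry
    printf '192.168.61.1\thost.minikube.internal\n' >> /tmp/hosts.new   # re-add the gateway mapping
    sudo cp /tmp/hosts.new /etc/hosts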
	I0708 20:55:56.418081   59107 kubeadm.go:877] updating cluster {Name:embed-certs-239931 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:55:56.418244   59107 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:55:56.418312   59107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:55:56.459969   59107 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:55:56.460034   59107 ssh_runner.go:195] Run: which lz4
	I0708 20:55:56.464561   59107 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 20:55:56.469087   59107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:55:56.469130   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 20:55:58.010716   59107 crio.go:462] duration metric: took 1.546186223s to copy over tarball
	I0708 20:55:58.010782   59107 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:55:57.731640   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.732172   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.732223   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:57.732129   60221 retry.go:31] will retry after 554.611559ms: waiting for machine to come up
	I0708 20:55:58.287924   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.288512   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.288557   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:58.288491   60221 retry.go:31] will retry after 642.466107ms: waiting for machine to come up
	I0708 20:55:58.932485   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.933002   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.933032   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:58.932958   60221 retry.go:31] will retry after 999.83146ms: waiting for machine to come up
	I0708 20:55:59.934050   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:59.934618   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:59.934664   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:59.934571   60221 retry.go:31] will retry after 1.069868254s: waiting for machine to come up
	I0708 20:56:01.006049   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:01.006563   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:01.006594   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:01.006506   60221 retry.go:31] will retry after 1.182777891s: waiting for machine to come up
	I0708 20:56:02.191001   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:02.191460   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:02.191483   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:02.191418   60221 retry.go:31] will retry after 1.559742627s: waiting for machine to come up
	I0708 20:56:00.267199   59107 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256392679s)
	I0708 20:56:00.267230   59107 crio.go:469] duration metric: took 2.256489175s to extract the tarball
	I0708 20:56:00.267240   59107 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:56:00.305692   59107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:00.346669   59107 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:56:00.346694   59107 cache_images.go:84] Images are preloaded, skipping loading
	I0708 20:56:00.346703   59107 kubeadm.go:928] updating node { 192.168.61.126 8443 v1.30.2 crio true true} ...
	I0708 20:56:00.346804   59107 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-239931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:00.346868   59107 ssh_runner.go:195] Run: crio config
	I0708 20:56:00.392577   59107 cni.go:84] Creating CNI manager for ""
	I0708 20:56:00.392597   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:00.392608   59107 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:00.392637   59107 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-239931 NodeName:embed-certs-239931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:00.392814   59107 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-239931"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:00.392894   59107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:00.403593   59107 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:00.403675   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:00.413449   59107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0708 20:56:00.430407   59107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:00.447599   59107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
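The kubeadm.yaml rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new; on this restart path it is later consumed phase by phase rather than through a single kubeadm init. A condensed sketch of that sequence, using the same binary path and phases that appear further down in this log:

    # Phased control-plane restart as exercised below (sketch, not a verbatim transcript).
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo cp "${CFG}.new" "$CFG"
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local" "addon all"; do
      # $phase is intentionally unquoted so "certs all" expands to two arguments
      sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase $phase --config "$CFG"
    done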
	I0708 20:56:00.465525   59107 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:00.469912   59107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:00.483255   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:00.623802   59107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:00.642946   59107 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931 for IP: 192.168.61.126
	I0708 20:56:00.642967   59107 certs.go:194] generating shared ca certs ...
	I0708 20:56:00.642982   59107 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:00.643143   59107 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:00.643184   59107 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:00.643193   59107 certs.go:256] generating profile certs ...
	I0708 20:56:00.643270   59107 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/client.key
	I0708 20:56:00.643317   59107 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.key.7743ab88
	I0708 20:56:00.643354   59107 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.key
	I0708 20:56:00.643487   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:00.643524   59107 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:00.643533   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:00.643556   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:00.643579   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:00.643604   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:00.643670   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:00.644353   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:00.699260   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:00.752536   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:00.783946   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:00.812524   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0708 20:56:00.843035   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:56:00.872061   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:00.898805   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 20:56:00.925402   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:00.952114   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:00.984067   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:01.010037   59107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:01.027599   59107 ssh_runner.go:195] Run: openssl version
	I0708 20:56:01.033942   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:01.046273   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.051807   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.051887   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.058482   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:01.070774   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:01.083215   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.088389   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.088460   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.094594   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:01.107360   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:01.119973   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.125011   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.125085   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.131596   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
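The openssl x509 -hash / ln -fs pairs above implement the standard OpenSSL hashed trust directory: each CA certificate is exposed under /etc/ssl/certs as a symlink named <subject-hash>.0. The same idiom in two lines (cert path taken from the log):

    # Install a CA into the OpenSSL hashed trust directory, as done above.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$CERT" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"   # e.g. b5213941.0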
	I0708 20:56:01.143993   59107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:01.149299   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:01.156201   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:01.162939   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:01.169874   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:01.176264   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:01.182905   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
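The -checkend 86400 runs above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is presumably what would push the restart path into regenerating certificates instead of reusing them. For example:

    # Exit 0: valid for at least another day; exit 1: expiring within 24h.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid" || echo "expires within 24h"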
	I0708 20:56:01.189961   59107 kubeadm.go:391] StartCluster: {Name:embed-certs-239931 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:01.190041   59107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:01.190085   59107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:01.238097   59107 cri.go:89] found id: ""
	I0708 20:56:01.238167   59107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:01.250478   59107 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:01.250503   59107 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:01.250509   59107 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:01.250562   59107 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:01.261664   59107 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:01.262667   59107 kubeconfig.go:125] found "embed-certs-239931" server: "https://192.168.61.126:8443"
	I0708 20:56:01.264788   59107 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:01.275846   59107 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.126
	I0708 20:56:01.275888   59107 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:01.275908   59107 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:01.276006   59107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:01.318646   59107 cri.go:89] found id: ""
	I0708 20:56:01.318745   59107 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:01.340273   59107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:01.353325   59107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:01.353360   59107 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:01.353412   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:56:01.363659   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:01.363732   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:01.374340   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:56:01.384284   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:01.384352   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:01.394981   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:56:01.405532   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:01.405604   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:01.416741   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:56:01.427724   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:01.427812   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:56:01.439352   59107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:01.451286   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:01.581829   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.013995   59107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.432133224s)
	I0708 20:56:03.014024   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.229195   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.305328   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.415409   59107 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:03.415508   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:03.916187   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:04.416389   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:04.489450   59107 api_server.go:72] duration metric: took 1.074041899s to wait for apiserver process to appear ...
	I0708 20:56:04.489482   59107 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:04.489516   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:04.490133   59107 api_server.go:269] stopped: https://192.168.61.126:8443/healthz: Get "https://192.168.61.126:8443/healthz": dial tcp 192.168.61.126:8443: connect: connection refused
	I0708 20:56:04.989698   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:03.753446   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:03.753998   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:03.754026   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:03.753954   60221 retry.go:31] will retry after 1.922949894s: waiting for machine to come up
	I0708 20:56:05.679244   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:05.679831   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:05.679860   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:05.679794   60221 retry.go:31] will retry after 3.531627288s: waiting for machine to come up
	I0708 20:56:07.673375   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:56:07.673404   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:56:07.673420   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:07.776516   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:07.776551   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:07.989668   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:07.996865   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:07.996897   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:08.490538   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:08.496342   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:08.496374   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:08.990583   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:09.001043   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0708 20:56:09.011126   59107 api_server.go:141] control plane version: v1.30.2
	I0708 20:56:09.011160   59107 api_server.go:131] duration metric: took 4.521668725s to wait for apiserver health ...
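The polling above queries /healthz directly and rides out the transient 403 (anonymous user) and 500 (post-start hooks still settling) responses until it gets a plain 200/ok. The same verbose breakdown can be fetched through an authenticated client, e.g.:

    # Authenticated healthz query; avoids the system:anonymous 403 seen above.
    kubectl --context embed-certs-239931 get --raw '/healthz?verbose'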
	I0708 20:56:09.011171   59107 cni.go:84] Creating CNI manager for ""
	I0708 20:56:09.011179   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:09.012842   59107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:56:09.014197   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:56:09.041325   59107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 20:56:09.073110   59107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:56:09.086225   59107 system_pods.go:59] 8 kube-system pods found
	I0708 20:56:09.086265   59107 system_pods.go:61] "coredns-7db6d8ff4d-wnqsl" [868e66bf-9f86-465f-aad1-d11a6d218ee6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:56:09.086276   59107 system_pods.go:61] "etcd-embed-certs-239931" [48815314-6e48-4fe0-b7b1-4a1d2f6610d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:56:09.086286   59107 system_pods.go:61] "kube-apiserver-embed-certs-239931" [665311f4-d633-4b93-ae8c-2b43b45fff68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:56:09.086294   59107 system_pods.go:61] "kube-controller-manager-embed-certs-239931" [4ab6d657-8c74-491c-b965-ac68f2bd323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:56:09.086309   59107 system_pods.go:61] "kube-proxy-5h5xl" [9b169148-aa75-40a2-b08b-1d579ee179b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 20:56:09.086316   59107 system_pods.go:61] "kube-scheduler-embed-certs-239931" [012399d8-10a4-407d-a899-3c840dd52ca8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:56:09.086331   59107 system_pods.go:61] "metrics-server-569cc877fc-h4btg" [c78cfc3c-159f-4a06-b4a0-63f8bd0a6703] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:56:09.086339   59107 system_pods.go:61] "storage-provisioner" [2ca0ea1d-5d1c-4e18-a871-e035a8946b3c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 20:56:09.086348   59107 system_pods.go:74] duration metric: took 13.216051ms to wait for pod list to return data ...
	I0708 20:56:09.086363   59107 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:56:09.089689   59107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:56:09.089719   59107 node_conditions.go:123] node cpu capacity is 2
	I0708 20:56:09.089732   59107 node_conditions.go:105] duration metric: took 3.363611ms to run NodePressure ...
	I0708 20:56:09.089751   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:09.377271   59107 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:56:09.383148   59107 kubeadm.go:733] kubelet initialised
	I0708 20:56:09.383174   59107 kubeadm.go:734] duration metric: took 5.876526ms waiting for restarted kubelet to initialise ...
	I0708 20:56:09.383183   59107 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:56:09.388903   59107 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:09.214856   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:09.215410   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:09.215441   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:09.215355   60221 retry.go:31] will retry after 3.64169465s: waiting for machine to come up
	I0708 20:56:14.180834   58678 start.go:364] duration metric: took 35.354748041s to acquireMachinesLock for "no-preload-028021"
	I0708 20:56:14.180893   58678 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:56:14.180905   58678 fix.go:54] fixHost starting: 
	I0708 20:56:14.181259   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:56:14.181299   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:56:14.197712   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I0708 20:56:14.198157   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:56:14.198615   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:56:14.198637   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:56:14.198996   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:56:14.199173   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:14.199342   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:56:14.200905   58678 fix.go:112] recreateIfNeeded on no-preload-028021: state=Stopped err=<nil>
	I0708 20:56:14.200930   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	W0708 20:56:14.201103   58678 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:56:14.203062   58678 out.go:177] * Restarting existing kvm2 VM for "no-preload-028021" ...
	I0708 20:56:11.396453   59107 pod_ready.go:102] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:13.396672   59107 pod_ready.go:102] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:12.860535   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.860988   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Found IP for machine: 192.168.72.163
	I0708 20:56:12.861010   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Reserving static IP address...
	I0708 20:56:12.861027   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has current primary IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.861445   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-071971", mac: "52:54:00:40:a7:be", ip: "192.168.72.163"} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.861473   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Reserved static IP address: 192.168.72.163
	I0708 20:56:12.861494   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | skip adding static IP to network mk-default-k8s-diff-port-071971 - found existing host DHCP lease matching {name: "default-k8s-diff-port-071971", mac: "52:54:00:40:a7:be", ip: "192.168.72.163"}
	I0708 20:56:12.861515   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Getting to WaitForSSH function...
	I0708 20:56:12.861531   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for SSH to be available...
	I0708 20:56:12.864099   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.864436   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.864465   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.864631   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Using SSH client type: external
	I0708 20:56:12.864663   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa (-rw-------)
	I0708 20:56:12.864693   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:56:12.864708   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | About to run SSH command:
	I0708 20:56:12.864721   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | exit 0
	I0708 20:56:12.996077   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | SSH cmd err, output: <nil>: 
	I0708 20:56:12.996459   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetConfigRaw
	I0708 20:56:12.997091   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:12.999431   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.999815   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.999844   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.000145   59655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:56:13.000354   59655 machine.go:94] provisionDockerMachine start ...
	I0708 20:56:13.000377   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:13.000558   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.002898   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.003255   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.003290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.003444   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.003626   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.003778   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.003930   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.004094   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.004297   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.004311   59655 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:56:13.119929   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:56:13.119956   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.120203   59655 buildroot.go:166] provisioning hostname "default-k8s-diff-port-071971"
	I0708 20:56:13.120320   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.120544   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.123750   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.124225   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.124256   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.124438   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.124647   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.124818   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.124993   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.125155   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.125339   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.125360   59655 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-071971 && echo "default-k8s-diff-port-071971" | sudo tee /etc/hostname
	I0708 20:56:13.256165   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-071971
	
	I0708 20:56:13.256199   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.258991   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.259342   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.259376   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.259596   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.259828   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.260011   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.260149   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.260325   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.260506   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.260530   59655 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-071971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-071971/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-071971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:56:13.381593   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
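
    For anyone reproducing this provisioning step outside the test harness, the two SSH commands above (setting the hostname, then the guarded /etc/hosts script) boil down to roughly the following; the hostname and paths are taken from the log, and the snippet simplifies the sed branch that rewrites an existing 127.0.1.1 entry:

        # Set and persist the machine hostname (mirrors the command run over SSH above).
        sudo hostname default-k8s-diff-port-071971
        echo 'default-k8s-diff-port-071971' | sudo tee /etc/hostname

        # Ensure /etc/hosts resolves the new name, as the guarded script above does.
        grep -q 'default-k8s-diff-port-071971' /etc/hosts \
          || echo '127.0.1.1 default-k8s-diff-port-071971' | sudo tee -a /etc/hosts
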
	I0708 20:56:13.381627   59655 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:56:13.381684   59655 buildroot.go:174] setting up certificates
	I0708 20:56:13.381700   59655 provision.go:84] configureAuth start
	I0708 20:56:13.381716   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.382023   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:13.385065   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.385358   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.385394   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.385566   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.387752   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.388092   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.388132   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.388290   59655 provision.go:143] copyHostCerts
	I0708 20:56:13.388350   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:56:13.388361   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:56:13.388408   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:56:13.388506   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:56:13.388516   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:56:13.388536   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:56:13.388587   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:56:13.388593   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:56:13.388610   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:56:13.389123   59655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-071971 san=[127.0.0.1 192.168.72.163 default-k8s-diff-port-071971 localhost minikube]
	I0708 20:56:13.445451   59655 provision.go:177] copyRemoteCerts
	I0708 20:56:13.445509   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:56:13.445536   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.448926   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.449291   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.449320   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.449579   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.449785   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.449944   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.450097   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:13.542311   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0708 20:56:13.570585   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 20:56:13.597943   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:56:13.623837   59655 provision.go:87] duration metric: took 242.102893ms to configureAuth
	I0708 20:56:13.623874   59655 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:56:13.624077   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:56:13.624144   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.626802   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.627247   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.627277   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.627553   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.627738   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.627910   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.628047   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.628214   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.628414   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.628442   59655 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:56:13.930321   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:56:13.930349   59655 machine.go:97] duration metric: took 929.979999ms to provisionDockerMachine
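
    A note on the command logged a few lines above: the %!s(MISSING) token (and the similar %!N(MISSING) and %!p(MISSING) tokens elsewhere in this log) is how Go's fmt package renders a format verb that has no matching argument when an already-built command string is passed back through the logger; the shell itself received a plain %s. Reconstructed from the output echoed back by SSH above, the command was approximately:

        # Approximate reconstruction of the logged command; the literal option line is
        # confirmed by the "SSH cmd err, output" echo above.
        sudo mkdir -p /etc/sysconfig
        printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
          | sudo tee /etc/sysconfig/crio.minikube
        sudo systemctl restart crio
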
	I0708 20:56:13.930361   59655 start.go:293] postStartSetup for "default-k8s-diff-port-071971" (driver="kvm2")
	I0708 20:56:13.930371   59655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:56:13.930385   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:13.930714   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:56:13.930747   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.933397   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.933704   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.933735   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.933927   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.934119   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.934266   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.934393   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.019603   59655 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:56:14.024556   59655 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:56:14.024589   59655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:56:14.024651   59655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:56:14.024744   59655 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:56:14.024836   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:56:14.035798   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:14.062351   59655 start.go:296] duration metric: took 131.974167ms for postStartSetup
	I0708 20:56:14.062402   59655 fix.go:56] duration metric: took 19.193418124s for fixHost
	I0708 20:56:14.062428   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.065264   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.065784   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.065822   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.066027   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.066271   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.066441   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.066716   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.066965   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:14.067197   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:14.067210   59655 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:56:14.180654   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472174.151879540
	
	I0708 20:56:14.180683   59655 fix.go:216] guest clock: 1720472174.151879540
	I0708 20:56:14.180695   59655 fix.go:229] Guest: 2024-07-08 20:56:14.15187954 +0000 UTC Remote: 2024-07-08 20:56:14.062408788 +0000 UTC m=+156.804206336 (delta=89.470752ms)
	I0708 20:56:14.180751   59655 fix.go:200] guest clock delta is within tolerance: 89.470752ms
	I0708 20:56:14.180757   59655 start.go:83] releasing machines lock for "default-k8s-diff-port-071971", held for 19.311816598s
	I0708 20:56:14.180802   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.181119   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:14.183833   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.184164   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.184194   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.184365   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.184862   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.185029   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.185105   59655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:56:14.185152   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.185222   59655 ssh_runner.go:195] Run: cat /version.json
	I0708 20:56:14.185248   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.187788   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188002   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188135   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.188167   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.188299   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.188328   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188501   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.188505   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.188641   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.188715   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.188803   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.188885   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.189022   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.298253   59655 ssh_runner.go:195] Run: systemctl --version
	I0708 20:56:14.305004   59655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:56:14.457540   59655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:56:14.464497   59655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:56:14.464567   59655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:56:14.482063   59655 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:56:14.482093   59655 start.go:494] detecting cgroup driver to use...
	I0708 20:56:14.482172   59655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:56:14.500206   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:56:14.515905   59655 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:56:14.515952   59655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:56:14.532277   59655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:56:14.552772   59655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:56:14.686229   59655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:56:14.845428   59655 docker.go:233] disabling docker service ...
	I0708 20:56:14.845496   59655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:56:14.863157   59655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:56:14.881174   59655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:56:15.029269   59655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:56:15.165105   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:56:15.181619   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:56:15.202743   59655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:56:15.202806   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.215848   59655 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:56:15.215925   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.228697   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.240964   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.257002   59655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:56:15.270309   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.283215   59655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.303235   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.322364   59655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:56:15.340757   59655 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:56:15.340836   59655 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:56:15.360592   59655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:56:15.372486   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:15.510210   59655 ssh_runner.go:195] Run: sudo systemctl restart crio
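
    The Run: lines above are minikube reconfiguring CRI-O for this profile one command at a time. Pulled together from those same log lines (and nothing else), the core of the sequence is roughly the following; this is a consolidation for readability rather than the tool's actual code path, and crio is only restarted at the end:

        # Point crictl at the CRI-O socket.
        printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

        # Pin the pause image and switch CRI-O to the cgroupfs cgroup manager.
        sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
        sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
        sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
        sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

        # Allow unprivileged processes in pods to bind low ports via default_sysctls.
        sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf
        sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf \
          || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
        sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf

        # Kernel prerequisites: br_netfilter (the sysctl probe above fails until it is loaded)
        # and IPv4 forwarding.
        sudo modprobe br_netfilter
        sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

        # Apply everything.
        sudo systemctl daemon-reload
        sudo systemctl restart crio
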
	I0708 20:56:15.656090   59655 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:56:15.656169   59655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:56:15.661847   59655 start.go:562] Will wait 60s for crictl version
	I0708 20:56:15.661917   59655 ssh_runner.go:195] Run: which crictl
	I0708 20:56:15.666004   59655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:56:15.707842   59655 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:56:15.707928   59655 ssh_runner.go:195] Run: crio --version
	I0708 20:56:15.740434   59655 ssh_runner.go:195] Run: crio --version
	I0708 20:56:15.772450   59655 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:56:14.204596   58678 main.go:141] libmachine: (no-preload-028021) Calling .Start
	I0708 20:56:14.204780   58678 main.go:141] libmachine: (no-preload-028021) Ensuring networks are active...
	I0708 20:56:14.205463   58678 main.go:141] libmachine: (no-preload-028021) Ensuring network default is active
	I0708 20:56:14.205799   58678 main.go:141] libmachine: (no-preload-028021) Ensuring network mk-no-preload-028021 is active
	I0708 20:56:14.206280   58678 main.go:141] libmachine: (no-preload-028021) Getting domain xml...
	I0708 20:56:14.207187   58678 main.go:141] libmachine: (no-preload-028021) Creating domain...
	I0708 20:56:15.514100   58678 main.go:141] libmachine: (no-preload-028021) Waiting to get IP...
	I0708 20:56:15.514946   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:15.515419   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:15.515473   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:15.515397   60369 retry.go:31] will retry after 282.59763ms: waiting for machine to come up
	I0708 20:56:15.799976   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:15.800525   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:15.800555   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:15.800482   60369 retry.go:31] will retry after 377.094067ms: waiting for machine to come up
	I0708 20:56:16.179257   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:16.179953   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:16.179979   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:16.179861   60369 retry.go:31] will retry after 433.953923ms: waiting for machine to come up
	I0708 20:56:15.773711   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:15.776947   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:15.777368   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:15.777400   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:15.777704   59655 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0708 20:56:15.782466   59655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:15.796924   59655 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:56:15.797072   59655 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:56:15.797138   59655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:15.841838   59655 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:56:15.841922   59655 ssh_runner.go:195] Run: which lz4
	I0708 20:56:15.846443   59655 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0708 20:56:15.851267   59655 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:56:15.851302   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 20:56:15.397039   59107 pod_ready.go:92] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:15.397070   59107 pod_ready.go:81] duration metric: took 6.008141421s for pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:15.397082   59107 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.405606   59107 pod_ready.go:92] pod "etcd-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:17.405638   59107 pod_ready.go:81] duration metric: took 2.008547358s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.405653   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.411786   59107 pod_ready.go:92] pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:17.411810   59107 pod_ready.go:81] duration metric: took 6.14625ms for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.411822   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.421681   59107 pod_ready.go:92] pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.421712   59107 pod_ready.go:81] duration metric: took 2.009879259s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.421725   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5h5xl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.428235   59107 pod_ready.go:92] pod "kube-proxy-5h5xl" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.428260   59107 pod_ready.go:81] duration metric: took 6.527896ms for pod "kube-proxy-5h5xl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.428269   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.433130   59107 pod_ready.go:92] pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.433154   59107 pod_ready.go:81] duration metric: took 4.87807ms for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.433163   59107 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:16.615670   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:16.616225   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:16.616257   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:16.616177   60369 retry.go:31] will retry after 489.658115ms: waiting for machine to come up
	I0708 20:56:17.107848   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:17.108391   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:17.108420   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:17.108341   60369 retry.go:31] will retry after 620.239043ms: waiting for machine to come up
	I0708 20:56:17.730239   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:17.730822   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:17.730854   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:17.730758   60369 retry.go:31] will retry after 818.379867ms: waiting for machine to come up
	I0708 20:56:18.550539   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:18.551024   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:18.551049   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:18.550993   60369 retry.go:31] will retry after 1.138596155s: waiting for machine to come up
	I0708 20:56:19.691669   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:19.692214   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:19.692267   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:19.692149   60369 retry.go:31] will retry after 1.467771065s: waiting for machine to come up
	I0708 20:56:21.161367   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:21.161916   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:21.161945   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:21.161854   60369 retry.go:31] will retry after 1.592022559s: waiting for machine to come up
	I0708 20:56:17.447251   59655 crio.go:462] duration metric: took 1.600850063s to copy over tarball
	I0708 20:56:17.447341   59655 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:56:19.773249   59655 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.325874804s)
	I0708 20:56:19.773277   59655 crio.go:469] duration metric: took 2.325993304s to extract the tarball
	I0708 20:56:19.773286   59655 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:56:19.811911   59655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:19.859029   59655 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:56:19.859060   59655 cache_images.go:84] Images are preloaded, skipping loading
	I0708 20:56:19.859070   59655 kubeadm.go:928] updating node { 192.168.72.163 8444 v1.30.2 crio true true} ...
	I0708 20:56:19.859208   59655 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-071971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:19.859281   59655 ssh_runner.go:195] Run: crio config
	I0708 20:56:19.905778   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:56:19.905806   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:19.905822   59655 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:19.905847   59655 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.163 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-071971 NodeName:default-k8s-diff-port-071971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:19.906035   59655 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.163
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-071971"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
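	The kubeadm.yaml rendered above is written to /var/tmp/minikube/kubeadm.yaml.new below and later fed to the kubeadm init phases. As a rough illustration of how a fragment like the KubeletConfiguration section can be produced from Go data, here is a minimal sketch using gopkg.in/yaml.v3; the struct and the handful of fields are assumptions chosen for illustration, not how minikube actually generates its config (it renders templates).

// Hedged sketch: marshalling a small KubeletConfiguration-like fragment to YAML.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	APIVersion    string            `yaml:"apiVersion"`
	Kind          string            `yaml:"kind"`
	CgroupDriver  string            `yaml:"cgroupDriver"`
	EvictionHard  map[string]string `yaml:"evictionHard"`
	FailSwapOn    bool              `yaml:"failSwapOn"`
	StaticPodPath string            `yaml:"staticPodPath"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion:   "kubelet.config.k8s.io/v1beta1",
		Kind:         "KubeletConfiguration",
		CgroupDriver: "cgroupfs",
		// Disable disk-pressure eviction, as in the generated config above.
		EvictionHard: map[string]string{
			"nodefs.available":  "0%",
			"nodefs.inodesFree": "0%",
			"imagefs.available": "0%",
		},
		FailSwapOn:    false,
		StaticPodPath: "/etc/kubernetes/manifests",
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}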
	I0708 20:56:19.906113   59655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:19.916307   59655 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:19.916388   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:19.926496   59655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0708 20:56:19.947778   59655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:19.969466   59655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0708 20:56:19.991103   59655 ssh_runner.go:195] Run: grep 192.168.72.163	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:19.995180   59655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:20.008005   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:20.143869   59655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:20.162694   59655 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971 for IP: 192.168.72.163
	I0708 20:56:20.162713   59655 certs.go:194] generating shared ca certs ...
	I0708 20:56:20.162745   59655 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:20.162930   59655 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:20.162986   59655 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:20.162997   59655 certs.go:256] generating profile certs ...
	I0708 20:56:20.163097   59655 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.key
	I0708 20:56:20.163220   59655 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.key.17bd30e8
	I0708 20:56:20.163259   59655 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.key
	I0708 20:56:20.163394   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:20.163478   59655 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:20.163493   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:20.163524   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:20.163558   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:20.163594   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:20.163659   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:20.164318   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:20.198987   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:20.251872   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:20.281444   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:20.305751   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0708 20:56:20.332608   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 20:56:20.365206   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:20.399631   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:56:20.430016   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:20.462126   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:20.492669   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:20.521867   59655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:20.540725   59655 ssh_runner.go:195] Run: openssl version
	I0708 20:56:20.546789   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:20.558515   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.563342   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.563430   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.570039   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:20.585367   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:20.601217   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.605930   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.605993   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.612015   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:56:20.623796   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:20.635305   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.640571   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.640649   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.648600   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:20.663899   59655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:20.669383   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:20.675967   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:20.682513   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:20.690280   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:20.698720   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:20.705678   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
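	Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours. A small Go equivalent using crypto/x509 is sketched below; the certificate path in main is only an example.

// Hedged sketch: the Go analogue of `openssl x509 -checkend 86400` for a PEM cert.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}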
	I0708 20:56:20.712524   59655 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:20.712643   59655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:20.712700   59655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:20.761032   59655 cri.go:89] found id: ""
	I0708 20:56:20.761107   59655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:20.772712   59655 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:20.772736   59655 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:20.772742   59655 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:20.772793   59655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:20.784860   59655 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:20.785974   59655 kubeconfig.go:125] found "default-k8s-diff-port-071971" server: "https://192.168.72.163:8444"
	I0708 20:56:20.788290   59655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:20.799889   59655 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.163
	I0708 20:56:20.799919   59655 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:20.799947   59655 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:20.800011   59655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:20.846864   59655 cri.go:89] found id: ""
	I0708 20:56:20.846936   59655 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:20.865883   59655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:20.877476   59655 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:20.877495   59655 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:20.877548   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0708 20:56:20.889786   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:20.889853   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:20.902185   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0708 20:56:20.913510   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:20.913573   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:20.923964   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0708 20:56:20.934048   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:20.934131   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:20.945078   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0708 20:56:20.955290   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:20.955354   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:56:20.966182   59655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:20.977508   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:21.319213   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:21.511204   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:23.942367   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:22.755738   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:22.756182   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:22.756243   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:22.756167   60369 retry.go:31] will retry after 1.858003233s: waiting for machine to come up
	I0708 20:56:24.616152   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:24.616674   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:24.616703   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:24.616618   60369 retry.go:31] will retry after 2.203640369s: waiting for machine to come up
	I0708 20:56:22.471504   59655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.152252924s)
	I0708 20:56:22.471539   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.692407   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.756884   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.892773   59655 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:22.892888   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.393789   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.893298   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.941073   59655 api_server.go:72] duration metric: took 1.048301169s to wait for apiserver process to appear ...
	I0708 20:56:23.941100   59655 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:23.941131   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.221991   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:56:27.222029   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:56:27.222048   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:26.441670   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:28.939138   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:27.353017   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.353069   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:27.442130   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.447304   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.447326   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:27.941979   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.951850   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.951878   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:28.441380   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:28.452031   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:28.452069   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:28.941613   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:28.946045   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:28.946084   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:29.441485   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:29.448847   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:29.448877   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:29.941906   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:29.946380   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:29.946416   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:30.442013   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:30.447291   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 200:
	ok
	I0708 20:56:30.454664   59655 api_server.go:141] control plane version: v1.30.2
	I0708 20:56:30.454693   59655 api_server.go:131] duration metric: took 6.513586414s to wait for apiserver health ...
	I0708 20:56:30.454701   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:56:30.454707   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:30.456577   59655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
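	Between 20:56:23 and 20:56:30 the runner polls https://192.168.72.163:8444/healthz roughly every 500ms, logging the 403 and 500 bodies until the endpoint finally returns 200. The sketch below shows a comparable poll loop in Go; the timeout, interval, and the InsecureSkipVerify transport are assumptions made for brevity, not minikube's implementation (which trusts the cluster CA instead of skipping verification).

// Hedged sketch: polling an apiserver /healthz endpoint until it returns 200 OK.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cert signed by the cluster CA; a real client
		// would load that CA rather than skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
			fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.163:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}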
	I0708 20:56:26.821665   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:26.822266   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:26.822297   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:26.822209   60369 retry.go:31] will retry after 3.478824168s: waiting for machine to come up
	I0708 20:56:30.302329   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:30.302766   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:30.302796   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:30.302707   60369 retry.go:31] will retry after 3.597512692s: waiting for machine to come up
	I0708 20:56:30.458168   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:56:30.469918   59655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 20:56:30.492348   59655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:56:30.503174   59655 system_pods.go:59] 8 kube-system pods found
	I0708 20:56:30.503210   59655 system_pods.go:61] "coredns-7db6d8ff4d-c4tzw" [e5ea7dde-1134-45d0-b3e2-176e6a8f068e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:56:30.503218   59655 system_pods.go:61] "etcd-default-k8s-diff-port-071971" [693fd668-83c2-43e6-bf43-7b1a9e654db0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:56:30.503226   59655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071971" [eadde33a-b967-4a58-9730-d163e6b8c0c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:56:30.503233   59655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071971" [99bd8e55-e0a9-4071-a0f0-dc9d1e79b58d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:56:30.503238   59655 system_pods.go:61] "kube-proxy-vq4l8" [e2a4779c-e8ed-4f5b-872b-d10604936178] Running
	I0708 20:56:30.503244   59655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071971" [af6b0a79-be1e-4caa-86a6-47ac782ac438] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:56:30.503250   59655 system_pods.go:61] "metrics-server-569cc877fc-h2dzd" [7075aa8e-0716-4965-8a13-3ed804190b3e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:56:30.503257   59655 system_pods.go:61] "storage-provisioner" [9fca5ac9-cd65-4257-b012-20ded80a39a5] Running
	I0708 20:56:30.503265   59655 system_pods.go:74] duration metric: took 10.887672ms to wait for pod list to return data ...
	I0708 20:56:30.503279   59655 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:56:30.509137   59655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:56:30.509170   59655 node_conditions.go:123] node cpu capacity is 2
	I0708 20:56:30.509189   59655 node_conditions.go:105] duration metric: took 5.903588ms to run NodePressure ...
	I0708 20:56:30.509210   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:30.780430   59655 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:56:30.788138   59655 kubeadm.go:733] kubelet initialised
	I0708 20:56:30.788168   59655 kubeadm.go:734] duration metric: took 7.711132ms waiting for restarted kubelet to initialise ...
	I0708 20:56:30.788177   59655 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:56:30.797001   59655 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace to be "Ready" ...
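	The pod_ready.go lines wait for system-critical pods such as coredns-7db6d8ff4d-c4tzw to report the Ready condition. A minimal client-go sketch of that wait follows; the kubeconfig path, namespace, pod name, and polling interval are assumptions for illustration, not the test harness's own code.

// Hedged sketch: polling a named pod until its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-c4tzw", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}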
	I0708 20:56:30.939824   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:32.940860   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:34.941652   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:33.901849   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.902332   58678 main.go:141] libmachine: (no-preload-028021) Found IP for machine: 192.168.39.108
	I0708 20:56:33.902356   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has current primary IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.902361   58678 main.go:141] libmachine: (no-preload-028021) Reserving static IP address...
	I0708 20:56:33.902766   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "no-preload-028021", mac: "52:54:00:c5:5d:f8", ip: "192.168.39.108"} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:33.902797   58678 main.go:141] libmachine: (no-preload-028021) DBG | skip adding static IP to network mk-no-preload-028021 - found existing host DHCP lease matching {name: "no-preload-028021", mac: "52:54:00:c5:5d:f8", ip: "192.168.39.108"}
	I0708 20:56:33.902808   58678 main.go:141] libmachine: (no-preload-028021) Reserved static IP address: 192.168.39.108
	I0708 20:56:33.902825   58678 main.go:141] libmachine: (no-preload-028021) Waiting for SSH to be available...
	I0708 20:56:33.902835   58678 main.go:141] libmachine: (no-preload-028021) DBG | Getting to WaitForSSH function...
	I0708 20:56:33.905031   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.905318   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:33.905341   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.905479   58678 main.go:141] libmachine: (no-preload-028021) DBG | Using SSH client type: external
	I0708 20:56:33.905509   58678 main.go:141] libmachine: (no-preload-028021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa (-rw-------)
	I0708 20:56:33.905535   58678 main.go:141] libmachine: (no-preload-028021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:56:33.905560   58678 main.go:141] libmachine: (no-preload-028021) DBG | About to run SSH command:
	I0708 20:56:33.905573   58678 main.go:141] libmachine: (no-preload-028021) DBG | exit 0
	I0708 20:56:34.035510   58678 main.go:141] libmachine: (no-preload-028021) DBG | SSH cmd err, output: <nil>: 
	I0708 20:56:34.035876   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetConfigRaw
	I0708 20:56:34.036501   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:34.039070   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.039467   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.039496   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.039711   58678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/config.json ...
	I0708 20:56:34.039936   58678 machine.go:94] provisionDockerMachine start ...
	I0708 20:56:34.039955   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:34.040191   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.042269   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.042640   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.042666   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.042793   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.042954   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.043125   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.043292   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.043496   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.043662   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.043671   58678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:56:34.156092   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:56:34.156143   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.156412   58678 buildroot.go:166] provisioning hostname "no-preload-028021"
	I0708 20:56:34.156441   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.156639   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.159015   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.159420   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.159467   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.159606   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.159817   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.160015   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.160214   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.160407   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.160572   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.160584   58678 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-028021 && echo "no-preload-028021" | sudo tee /etc/hostname
	I0708 20:56:34.286222   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-028021
	
	I0708 20:56:34.286250   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.289067   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.289376   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.289399   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.289617   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.289832   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.289991   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.290129   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.290310   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.290471   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.290485   58678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-028021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-028021/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-028021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:56:34.414724   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
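	The script in the command above only rewrites the 127.0.1.1 entry when it does not already point at the new hostname. A minimal way to confirm the result on the guest, assuming the standard Buildroot /etc/hosts layout (sketch only, not part of the captured run):
	    # hedged sketch: verify the mapping the hostname script writes
	    grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # expected: 127.0.1.1 no-preload-028021
	    hostname                                        # expected: no-preload-028021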
	I0708 20:56:34.414749   58678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:56:34.414790   58678 buildroot.go:174] setting up certificates
	I0708 20:56:34.414799   58678 provision.go:84] configureAuth start
	I0708 20:56:34.414808   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.415115   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:34.417919   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.418241   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.418273   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.418491   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.421129   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.421603   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.421634   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.421756   58678 provision.go:143] copyHostCerts
	I0708 20:56:34.421818   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:56:34.421839   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:56:34.421906   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:56:34.422023   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:56:34.422034   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:56:34.422064   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:56:34.422151   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:56:34.422161   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:56:34.422196   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:56:34.422276   58678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.no-preload-028021 san=[127.0.0.1 192.168.39.108 localhost minikube no-preload-028021]
	I0708 20:56:34.634189   58678 provision.go:177] copyRemoteCerts
	I0708 20:56:34.634253   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:56:34.634281   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.637123   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.637364   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.637396   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.637609   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.637912   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.638172   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.638410   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:34.726761   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:56:34.751947   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0708 20:56:34.776165   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:56:34.803849   58678 provision.go:87] duration metric: took 389.036659ms to configureAuth
	I0708 20:56:34.803880   58678 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:56:34.804125   58678 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:56:34.804202   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.808559   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.808925   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.808966   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.809164   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.809416   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.809572   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.809710   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.809874   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.810069   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.810097   58678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:56:35.096796   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:56:35.096822   58678 machine.go:97] duration metric: took 1.056870853s to provisionDockerMachine
	I0708 20:56:35.096834   58678 start.go:293] postStartSetup for "no-preload-028021" (driver="kvm2")
	I0708 20:56:35.096847   58678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:56:35.096864   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.097227   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:56:35.097266   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.100040   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.100428   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.100449   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.100637   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.100826   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.100967   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.101128   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.187796   58678 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:56:35.192221   58678 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:56:35.192248   58678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:56:35.192315   58678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:56:35.192383   58678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:56:35.192467   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:56:35.204227   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:35.230404   58678 start.go:296] duration metric: took 133.555408ms for postStartSetup
	I0708 20:56:35.230446   58678 fix.go:56] duration metric: took 21.04954132s for fixHost
	I0708 20:56:35.230464   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.233341   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.233654   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.233685   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.233878   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.234070   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.234248   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.234413   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.234611   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:35.234834   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:35.234849   58678 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:56:35.348439   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472195.300246165
	
	I0708 20:56:35.348459   58678 fix.go:216] guest clock: 1720472195.300246165
	I0708 20:56:35.348468   58678 fix.go:229] Guest: 2024-07-08 20:56:35.300246165 +0000 UTC Remote: 2024-07-08 20:56:35.230449891 +0000 UTC m=+338.995803708 (delta=69.796274ms)
	I0708 20:56:35.348487   58678 fix.go:200] guest clock delta is within tolerance: 69.796274ms
	I0708 20:56:35.348492   58678 start.go:83] releasing machines lock for "no-preload-028021", held for 21.167624903s
	I0708 20:56:35.348509   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.348752   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:35.351300   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.351779   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.351806   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.351977   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352557   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352725   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352799   58678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:56:35.352839   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.352942   58678 ssh_runner.go:195] Run: cat /version.json
	I0708 20:56:35.352969   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.355646   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356037   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.356071   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356117   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356267   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.356470   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.356555   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.356580   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356642   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.356706   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.356770   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.356885   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.357020   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.357154   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.438344   58678 ssh_runner.go:195] Run: systemctl --version
	I0708 20:56:35.470518   58678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:56:35.628022   58678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:56:35.636390   58678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:56:35.636468   58678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:56:35.654729   58678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:56:35.654753   58678 start.go:494] detecting cgroup driver to use...
	I0708 20:56:35.654824   58678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:56:35.678564   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:56:35.697122   58678 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:56:35.697202   58678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:56:35.713388   58678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:56:35.728254   58678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:56:35.874433   58678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:56:36.062521   58678 docker.go:233] disabling docker service ...
	I0708 20:56:36.062614   58678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:56:36.081225   58678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:56:36.096855   58678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:56:36.229455   58678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:56:36.375525   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:56:36.390772   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:56:36.411762   58678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:56:36.411905   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.423149   58678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:56:36.423218   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.434145   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.447568   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.458758   58678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:56:36.469393   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.479663   58678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.501298   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.512407   58678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:56:36.522400   58678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:56:36.522469   58678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:56:36.536310   58678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:56:36.547955   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:36.680408   58678 ssh_runner.go:195] Run: sudo systemctl restart crio
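	Taken together, the sequence above points CRI-O at the expected pause image, switches it to the cgroupfs cgroup manager, opens unprivileged ports from 0, loads br_netfilter when the bridge-nf sysctl is missing, enables IPv4 forwarding, and restarts the runtime. A condensed sketch of the same steps, reusing the drop-in path and commands shown in the log (illustrative only):
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo modprobe br_netfilter                           # only needed when the bridge-nf sysctl is absent
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload && sudo systemctl restart crio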
	I0708 20:56:36.860344   58678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:56:36.860416   58678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:56:36.866153   58678 start.go:562] Will wait 60s for crictl version
	I0708 20:56:36.866221   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:36.871623   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:56:36.917564   58678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:56:36.917655   58678 ssh_runner.go:195] Run: crio --version
	I0708 20:56:36.954595   58678 ssh_runner.go:195] Run: crio --version
	I0708 20:56:36.985788   58678 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:56:32.805051   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:35.303979   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:36.303556   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.303581   59655 pod_ready.go:81] duration metric: took 5.506548207s for pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.303590   59655 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.308571   59655 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.308596   59655 pod_ready.go:81] duration metric: took 4.998994ms for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.308610   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.314379   59655 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.314402   59655 pod_ready.go:81] duration metric: took 5.784289ms for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.314411   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.942775   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:39.440483   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:36.987568   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:36.990699   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:36.991105   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:36.991146   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:36.991378   58678 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 20:56:36.996102   58678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:37.012228   58678 kubeadm.go:877] updating cluster {Name:no-preload-028021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0708 20:56:37.012390   58678 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:56:37.012439   58678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:37.050690   58678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:56:37.050715   58678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 20:56:37.050765   58678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.050988   58678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.051005   58678 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.051146   58678 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.051199   58678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.051323   58678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.051396   58678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.051560   58678 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0708 20:56:37.052741   58678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.052826   58678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.052840   58678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.052853   58678 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0708 20:56:37.052910   58678 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.052742   58678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.052741   58678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.052744   58678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.237714   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.238720   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.246932   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0708 20:56:37.253938   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.256152   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.264291   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.304685   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.316620   58678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.2" does not exist at hash "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940" in container runtime
	I0708 20:56:37.316664   58678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.316710   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.352464   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.387003   58678 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0708 20:56:37.387039   58678 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.387078   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.513840   58678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.2" does not exist at hash "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974" in container runtime
	I0708 20:56:37.513886   58678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.513925   58678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.2" does not exist at hash "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe" in container runtime
	I0708 20:56:37.513938   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.513958   58678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.513987   58678 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0708 20:56:37.514000   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514016   58678 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.514054   58678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.2" does not exist at hash "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772" in container runtime
	I0708 20:56:37.514097   58678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.514061   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514136   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514138   58678 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0708 20:56:37.514078   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.514159   58678 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.514191   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514224   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.535635   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.535678   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.535744   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.535744   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.596995   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0708 20:56:37.597092   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.597102   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.651190   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0708 20:56:37.651320   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:37.695843   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0708 20:56:37.695944   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0708 20:56:37.695995   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.2 (exists)
	I0708 20:56:37.696018   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:37.696020   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.696052   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:37.695849   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0708 20:56:37.696071   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.695875   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0708 20:56:37.696117   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:37.696211   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:37.721410   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0708 20:56:37.721453   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.2 (exists)
	I0708 20:56:37.721536   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0708 20:56:37.721644   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:39.890974   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.19489331s)
	I0708 20:56:39.891017   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.2 (exists)
	I0708 20:56:39.891070   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2: (2.194976871s)
	I0708 20:56:39.891096   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 from cache
	I0708 20:56:39.891107   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.194875907s)
	I0708 20:56:39.891117   58678 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:39.891120   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0708 20:56:39.891156   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.2: (2.194966409s)
	I0708 20:56:39.891175   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:39.891184   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.2 (exists)
	I0708 20:56:39.891196   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.169535432s)
	I0708 20:56:39.891212   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0708 20:56:37.824606   59655 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:37.824634   59655 pod_ready.go:81] duration metric: took 1.510214968s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.824646   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vq4l8" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.829963   59655 pod_ready.go:92] pod "kube-proxy-vq4l8" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:37.829988   59655 pod_ready.go:81] duration metric: took 5.334688ms for pod "kube-proxy-vq4l8" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.829997   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:38.338575   59655 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:38.338611   59655 pod_ready.go:81] duration metric: took 508.60515ms for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:38.338625   59655 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:40.346498   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:41.939773   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:43.941838   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:41.962256   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.071056184s)
	I0708 20:56:41.962281   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0708 20:56:41.962304   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:41.962349   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:44.325730   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2: (2.363358371s)
	I0708 20:56:44.325760   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 from cache
	I0708 20:56:44.325789   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:44.325853   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:42.845177   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:44.846215   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:46.441086   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:48.939348   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:46.588882   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.263001s)
	I0708 20:56:46.588909   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 from cache
	I0708 20:56:46.588931   58678 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:46.588980   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:50.590689   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.001689035s)
	I0708 20:56:50.590724   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0708 20:56:50.590758   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:50.590813   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:47.345179   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:49.346736   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:51.846003   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:50.940095   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:53.441346   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:52.446198   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2: (1.855362154s)
	I0708 20:56:52.446229   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 from cache
	I0708 20:56:52.446247   58678 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:52.446284   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:53.400379   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0708 20:56:53.400419   58678 cache_images.go:123] Successfully loaded all cached images
	I0708 20:56:53.400424   58678 cache_images.go:92] duration metric: took 16.349697925s to LoadCachedImages
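	The cache_images.go / crio.go lines above show each cached tarball being pushed into CRI-O's image store with "sudo podman load -i <tarball>", one image at a time. A minimal local sketch of that loop follows (serial ordering and paths mirror the log; this is illustrative only, not minikube's ssh_runner-based implementation):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		// Tarball paths taken from the log above; adjust for your environment.
		images := []string{
			"/var/lib/minikube/images/kube-apiserver_v1.30.2",
			"/var/lib/minikube/images/kube-controller-manager_v1.30.2",
			"/var/lib/minikube/images/etcd_3.5.12-0",
			"/var/lib/minikube/images/kube-proxy_v1.30.2",
			"/var/lib/minikube/images/storage-provisioner_v5",
		}
		for _, img := range images {
			start := time.Now()
			// Equivalent of the "sudo podman load -i ..." runs in the log.
			out, err := exec.Command("sudo", "podman", "load", "-i", img).CombinedOutput()
			if err != nil {
				log.Fatalf("loading %s failed: %v\n%s", img, err, out)
			}
			fmt.Printf("loaded %s in %s\n", img, time.Since(start))
		}
	}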
	I0708 20:56:53.400436   58678 kubeadm.go:928] updating node { 192.168.39.108 8443 v1.30.2 crio true true} ...
	I0708 20:56:53.400599   58678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-028021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:53.400692   58678 ssh_runner.go:195] Run: crio config
	I0708 20:56:53.452091   58678 cni.go:84] Creating CNI manager for ""
	I0708 20:56:53.452117   58678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:53.452131   58678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:53.452150   58678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.108 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-028021 NodeName:no-preload-028021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:53.452285   58678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-028021"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
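	The kubeadm.go:181 options and the kubeadm.go:187 config above are the input and output of one rendering step: the per-profile values are substituted into a config template and the result is what later lands in /var/tmp/minikube/kubeadm.yaml.new. A heavily trimmed sketch of that kind of template rendering with text/template (the struct and template here are illustrative stand-ins, not minikube's actual types or template):

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// kubeadmOptions is a trimmed, hypothetical stand-in for the options
	// struct logged by kubeadm.go:181; the real type has many more fields.
	type kubeadmOptions struct {
		AdvertiseAddress  string
		APIServerPort     int
		NodeName          string
		CRISocket         string
		PodSubnet         string
		ServiceCIDR       string
		KubernetesVersion string
	}

	// tmpl covers only a slice of the generated config shown above.
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		opts := kubeadmOptions{
			AdvertiseAddress:  "192.168.39.108",
			APIServerPort:     8443,
			NodeName:          "no-preload-028021",
			CRISocket:         "unix:///var/run/crio/crio.sock",
			PodSubnet:         "10.244.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
			KubernetesVersion: "v1.30.2",
		}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			log.Fatal(err)
		}
	}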
	
	I0708 20:56:53.452344   58678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:53.464447   58678 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:53.464522   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:53.474930   58678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0708 20:56:53.493701   58678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:53.511491   58678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0708 20:56:53.530848   58678 ssh_runner.go:195] Run: grep 192.168.39.108	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:53.534931   58678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:53.547590   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:53.658960   58678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:53.677127   58678 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021 for IP: 192.168.39.108
	I0708 20:56:53.677151   58678 certs.go:194] generating shared ca certs ...
	I0708 20:56:53.677169   58678 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:53.677296   58678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:53.677330   58678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:53.677338   58678 certs.go:256] generating profile certs ...
	I0708 20:56:53.677420   58678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.key
	I0708 20:56:53.677471   58678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.key.c3084b2b
	I0708 20:56:53.677511   58678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.key
	I0708 20:56:53.677613   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:53.677639   58678 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:53.677645   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:53.677677   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:53.677752   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:53.677785   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:53.677825   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:53.680483   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:53.739386   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:53.770850   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:53.813958   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:53.850256   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0708 20:56:53.891539   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:56:53.921136   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:53.948966   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:56:53.977129   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:54.002324   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:54.028222   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:54.054099   58678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:54.073386   58678 ssh_runner.go:195] Run: openssl version
	I0708 20:56:54.079883   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:54.092980   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.097451   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.097503   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.103507   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:54.115123   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:54.126757   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.131534   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.131579   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.137333   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:54.148368   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:54.159628   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.164230   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.164280   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.170068   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
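	The ls/openssl/ln runs above follow the standard OpenSSL CA-directory convention: place the PEM under /usr/share/ca-certificates, compute its subject hash with "openssl x509 -hash -noout", then symlink /etc/ssl/certs/<hash>.0 to it. A minimal sketch of that step, shelling out to openssl the same way (paths illustrative, needs root, not minikube's certs.go code):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert hashes the certificate with openssl and points
	// /etc/ssl/certs/<hash>.0 at it, mirroring the sudo runs in the log.
	func installCACert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pem, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // equivalent of ln -f: replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}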
	I0708 20:56:54.182152   58678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:54.187146   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:54.193425   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:54.200491   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:54.207006   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:54.213285   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:54.220313   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
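	Each "openssl x509 -checkend 86400" run above asks a single question: does this certificate expire within the next 24 hours? A pure-Go equivalent of that check using crypto/x509 (the path is illustrative; the log probes several certs under /var/lib/minikube/certs):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, the same check "openssl x509 -checkend" performs (86400s = 24h).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}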
	I0708 20:56:54.227497   58678 kubeadm.go:391] StartCluster: {Name:no-preload-028021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:54.227597   58678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:54.227657   58678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:54.273025   58678 cri.go:89] found id: ""
	I0708 20:56:54.273094   58678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:54.284942   58678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:54.284965   58678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:54.284972   58678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:54.285023   58678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:54.296666   58678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:54.297740   58678 kubeconfig.go:125] found "no-preload-028021" server: "https://192.168.39.108:8443"
	I0708 20:56:54.299928   58678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:54.310186   58678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.108
	I0708 20:56:54.310224   58678 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:54.310235   58678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:54.310290   58678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:54.351640   58678 cri.go:89] found id: ""
	I0708 20:56:54.351709   58678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:54.370292   58678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:54.380551   58678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:54.380571   58678 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:54.380611   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:56:54.391462   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:54.391525   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:54.401804   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:56:54.411423   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:54.411501   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:54.422126   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:56:54.432236   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:54.432299   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:54.443001   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:56:54.454210   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:54.454271   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:56:54.465426   58678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:54.477714   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:54.593844   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.651092   58678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.057214047s)
	I0708 20:56:55.651120   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.862342   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.952093   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:56.070164   58678 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:56.070232   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:53.846869   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:55.847242   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:55.941645   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:58.440406   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:56.570644   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:57.071067   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:57.099879   58678 api_server.go:72] duration metric: took 1.02971362s to wait for apiserver process to appear ...
	I0708 20:56:57.099907   58678 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:57.099932   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.102677   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:57:00.102805   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:57:00.102854   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.143035   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:57:00.143069   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:57:00.600694   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.605315   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:00.605349   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:01.100628   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:01.106209   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:01.106235   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:58.345619   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:00.346515   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:01.600656   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:01.605348   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:01.605381   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:02.101023   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:02.105457   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:02.105490   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:02.600058   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:02.604370   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:02.604397   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:03.100641   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:03.105655   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:03.105685   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:03.600193   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:03.604714   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I0708 20:57:03.617761   58678 api_server.go:141] control plane version: v1.30.2
	I0708 20:57:03.617795   58678 api_server.go:131] duration metric: took 6.517881236s to wait for apiserver health ...
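	The healthz exchange above progresses from 403 (anonymous requests are forbidden until the RBAC bootstrap roles exist) through 500 (some post-start hooks still failing) to 200, driven by nothing more than repeated GETs against https://192.168.39.108:8443/healthz. A minimal poll-loop sketch of the same probe (TLS verification is skipped purely for illustration; this is not minikube's api_server.go client):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above.
		const url = "https://192.168.39.108:8443/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body)) // body is "ok"
					return
				}
				// 403/500 bodies like the ones above: keep polling.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver never became healthy")
	}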
	I0708 20:57:03.617805   58678 cni.go:84] Creating CNI manager for ""
	I0708 20:57:03.617811   58678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:57:03.619739   58678 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:57:00.940450   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:03.448484   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:03.621363   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:57:03.635846   58678 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 20:57:03.667045   58678 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:57:03.686236   58678 system_pods.go:59] 8 kube-system pods found
	I0708 20:57:03.686308   58678 system_pods.go:61] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:57:03.686322   58678 system_pods.go:61] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:57:03.686334   58678 system_pods.go:61] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:57:03.686348   58678 system_pods.go:61] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:57:03.686354   58678 system_pods.go:61] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 20:57:03.686363   58678 system_pods.go:61] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:57:03.686371   58678 system_pods.go:61] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:57:03.686379   58678 system_pods.go:61] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 20:57:03.686390   58678 system_pods.go:74] duration metric: took 19.320099ms to wait for pod list to return data ...
	I0708 20:57:03.686402   58678 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:57:03.696401   58678 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:57:03.696436   58678 node_conditions.go:123] node cpu capacity is 2
	I0708 20:57:03.696449   58678 node_conditions.go:105] duration metric: took 10.038061ms to run NodePressure ...
	I0708 20:57:03.696474   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:57:03.981698   58678 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:57:03.987357   58678 kubeadm.go:733] kubelet initialised
	I0708 20:57:03.987379   58678 kubeadm.go:734] duration metric: took 5.653044ms waiting for restarted kubelet to initialise ...
	I0708 20:57:03.987387   58678 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:03.993341   58678 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:03.999133   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:03.999165   58678 pod_ready.go:81] duration metric: took 5.798521ms for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:03.999177   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:03.999188   58678 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.004640   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "etcd-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.004666   58678 pod_ready.go:81] duration metric: took 5.471219ms for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.004676   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "etcd-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.004685   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.011313   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-apiserver-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.011342   58678 pod_ready.go:81] duration metric: took 6.65044ms for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.011354   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-apiserver-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.011364   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.071038   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.071092   58678 pod_ready.go:81] duration metric: took 59.716762ms for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.071105   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.071114   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.470702   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-proxy-6p6l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.470732   58678 pod_ready.go:81] duration metric: took 399.6044ms for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.470743   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-proxy-6p6l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.470753   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.871002   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-scheduler-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.871036   58678 pod_ready.go:81] duration metric: took 400.275337ms for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.871045   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-scheduler-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.871052   58678 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:05.270858   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:05.270883   58678 pod_ready.go:81] duration metric: took 399.822389ms for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:05.270892   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:05.270899   58678 pod_ready.go:38] duration metric: took 1.283504929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
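The pod_ready entries above wait on each system-critical pod and skip it while the hosting node still reports Ready=False, then log the total extra wait. A minimal client-go sketch of the same Ready-condition poll follows; the kubeconfig path, the 2-second poll interval, and the helper name are assumptions for illustration, not minikube's own code (the pod name and 4m0s budget come from the log lines above).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a kubeconfig for this cluster at the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll one of the pods named in the log until it is Ready or 4m0s passes.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-no-preload-028021", metav1.GetOptions{})
		if err == nil && podReady(p) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}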
	I0708 20:57:05.270914   58678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 20:57:05.284879   58678 ops.go:34] apiserver oom_adj: -16
	I0708 20:57:05.284900   58678 kubeadm.go:591] duration metric: took 10.999921787s to restartPrimaryControlPlane
	I0708 20:57:05.284912   58678 kubeadm.go:393] duration metric: took 11.057424996s to StartCluster
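The oom_adj probe a few lines above is how the restarted control plane is confirmed to have left the apiserver OOM-protected (-16). A local sketch of the same check, using the shell one-liner from the Run: line without the ssh_runner plumbing (running it directly on the node is an assumption of this sketch):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe as the log: read the kube-apiserver process's oom_adj.
	out, err := exec.Command("/bin/bash", "-c",
		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
}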
	I0708 20:57:05.284931   58678 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:57:05.285024   58678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:57:05.287297   58678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:57:05.287607   58678 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 20:57:05.287708   58678 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 20:57:05.287790   58678 addons.go:69] Setting storage-provisioner=true in profile "no-preload-028021"
	I0708 20:57:05.287807   58678 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:57:05.287809   58678 addons.go:69] Setting default-storageclass=true in profile "no-preload-028021"
	I0708 20:57:05.287845   58678 addons.go:69] Setting metrics-server=true in profile "no-preload-028021"
	I0708 20:57:05.287900   58678 addons.go:234] Setting addon metrics-server=true in "no-preload-028021"
	W0708 20:57:05.287912   58678 addons.go:243] addon metrics-server should already be in state true
	I0708 20:57:05.287946   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.287854   58678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-028021"
	I0708 20:57:05.287825   58678 addons.go:234] Setting addon storage-provisioner=true in "no-preload-028021"
	W0708 20:57:05.288007   58678 addons.go:243] addon storage-provisioner should already be in state true
	I0708 20:57:05.288040   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.288276   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288308   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.288380   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288382   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288430   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.288413   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.289690   58678 out.go:177] * Verifying Kubernetes components...
	I0708 20:57:05.291336   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:57:05.310203   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0708 20:57:05.310610   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.311107   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.311129   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.311527   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.311990   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.312026   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.332966   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0708 20:57:05.332984   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I0708 20:57:05.333056   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I0708 20:57:05.333449   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333466   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333497   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333994   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334014   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334138   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334146   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334158   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334163   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334347   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334514   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.334640   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334683   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334822   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.335285   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.335304   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.337444   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.338763   58678 addons.go:234] Setting addon default-storageclass=true in "no-preload-028021"
	W0708 20:57:05.338785   58678 addons.go:243] addon default-storageclass should already be in state true
	I0708 20:57:05.338814   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.339217   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.339304   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.339800   58678 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 20:57:05.341280   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 20:57:05.341303   58678 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 20:57:05.341327   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.344838   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.345488   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.345504   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.345683   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.345892   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.346146   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.346326   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.359060   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0708 20:57:05.359692   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.360186   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.360207   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.360545   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.361128   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.361173   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.361352   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0708 20:57:05.361971   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.362509   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.362525   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.362911   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.363148   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.364747   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.366914   58678 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:57:05.368450   58678 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:57:05.368467   58678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 20:57:05.368483   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.372067   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.372368   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.372387   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.372767   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.373030   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.373235   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.373389   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.379255   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39973
	I0708 20:57:05.379732   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.380405   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.380428   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.380832   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.381039   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.382973   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.383191   58678 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 20:57:05.383211   58678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 20:57:05.383231   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.386273   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.386682   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.386705   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.386997   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.387184   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.387336   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.387497   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.506081   58678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:57:05.525373   58678 node_ready.go:35] waiting up to 6m0s for node "no-preload-028021" to be "Ready" ...
	I0708 20:57:05.594638   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 20:57:05.594665   58678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 20:57:05.615378   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:57:05.620306   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 20:57:05.620331   58678 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 20:57:05.639840   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 20:57:05.692078   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 20:57:05.692109   58678 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 20:57:05.756364   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
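The applies above are plain kubectl invocations over the manifests just copied into /etc/kubernetes/addons. A rough standalone equivalent of the metrics-server apply, with the KUBECONFIG value, kubectl path, and manifest paths copied from the log line above; running it locally on the node instead of through the ssh_runner is this sketch's assumption:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Apply the metrics-server manifests exactly as logged above.
	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.30.2/kubectl apply " +
		"-f /etc/kubernetes/addons/metrics-apiservice.yaml " +
		"-f /etc/kubernetes/addons/metrics-server-deployment.yaml " +
		"-f /etc/kubernetes/addons/metrics-server-rbac.yaml " +
		"-f /etc/kubernetes/addons/metrics-server-service.yaml"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}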
	I0708 20:57:06.822244   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.206830336s)
	I0708 20:57:06.822310   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18243745s)
	I0708 20:57:06.822323   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822385   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065981271s)
	I0708 20:57:06.822418   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822432   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822390   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822351   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822504   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822850   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822870   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.822879   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822886   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822955   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.822971   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822976   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822993   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.822995   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.823009   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.823020   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.823010   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.823051   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.823154   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.823164   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.823366   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.823380   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.823390   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.825436   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.825455   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.825465   58678 addons.go:475] Verifying addon metrics-server=true in "no-preload-028021"
	I0708 20:57:06.830088   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.830108   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.830406   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.830420   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.830423   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.832322   58678 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0708 20:57:02.845629   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:05.353827   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:05.940469   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:08.439911   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:06.833974   58678 addons.go:510] duration metric: took 1.546270475s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
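The metrics-server addon enabled above is deployed with the test's fake.domain echoserver image (see the "Using image" line earlier), which is presumably why its pod never reports Ready later in this log. A quick manual probe of whether the aggregated metrics API is actually being served; the kubectl-on-PATH assumption and the use of the standard v1beta1.metrics.k8s.io path are mine, not part of the harness:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask the aggregated metrics API directly; if metrics-server never came
	// up (as in this run), this returns a service-unavailable style error.
	out, err := exec.Command("kubectl", "get", "--raw",
		"/apis/metrics.k8s.io/v1beta1/nodes").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("metrics API not available:", err)
	}
}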
	I0708 20:57:07.529328   58678 node_ready.go:53] node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:09.529406   58678 node_ready.go:53] node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:11.030134   58678 node_ready.go:49] node "no-preload-028021" has status "Ready":"True"
	I0708 20:57:11.030162   58678 node_ready.go:38] duration metric: took 5.504751555s for node "no-preload-028021" to be "Ready" ...
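node_ready above polls the Node object itself before any per-pod waits start. A minimal sketch of that condition check; the package and helper names are mine, not minikube's:

package nodecheck

import corev1 "k8s.io/api/core/v1"

// nodeReady mirrors what the node_ready wait above is polling for: the Node's
// Ready condition must report True before the per-pod readiness waits begin.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}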
	I0708 20:57:11.030174   58678 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:11.035309   58678 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.039750   58678 pod_ready.go:92] pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.039772   58678 pod_ready.go:81] duration metric: took 4.436756ms for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.039783   58678 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.044726   58678 pod_ready.go:92] pod "etcd-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.044748   58678 pod_ready.go:81] duration metric: took 4.958058ms for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.044756   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.049083   58678 pod_ready.go:92] pod "kube-apiserver-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.049104   58678 pod_ready.go:81] duration metric: took 4.34014ms for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.049115   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:07.846290   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:10.344964   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:10.939618   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:13.445191   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:13.056307   58678 pod_ready.go:102] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:15.056817   58678 pod_ready.go:102] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:16.063838   58678 pod_ready.go:92] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.063864   58678 pod_ready.go:81] duration metric: took 5.014740635s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.063875   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.082486   58678 pod_ready.go:92] pod "kube-proxy-6p6l6" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.082529   58678 pod_ready.go:81] duration metric: took 18.642044ms for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.082545   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.092312   58678 pod_ready.go:92] pod "kube-scheduler-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.092337   58678 pod_ready.go:81] duration metric: took 9.783638ms for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.092347   58678 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.353120   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:57:16.353203   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:57:16.355269   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:57:16.355317   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:57:16.355404   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:57:16.355558   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:57:16.355708   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:57:16.355815   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:57:16.358151   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:57:16.358312   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:57:16.358411   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:57:16.358531   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:57:16.358641   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:57:16.358732   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:57:16.358798   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:57:16.358893   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:57:16.359004   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:57:16.359128   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:57:16.359209   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:57:16.359288   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:57:16.359384   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:57:16.359509   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:57:16.359614   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:57:16.359725   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:57:16.359794   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:57:16.359881   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:57:16.359963   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:57:16.360002   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:57:16.360099   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:57:16.361960   57466 out.go:204]   - Booting up control plane ...
	I0708 20:57:16.362053   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:57:16.362196   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:57:16.362283   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:57:16.362402   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:57:16.362589   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:57:16.362819   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:57:16.362930   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363170   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363242   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363473   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363580   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363786   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363873   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364093   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364247   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364435   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364445   57466 kubeadm.go:309] 
	I0708 20:57:16.364476   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:57:16.364533   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:57:16.364541   57466 kubeadm.go:309] 
	I0708 20:57:16.364601   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:57:16.364636   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:57:16.364796   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:57:16.364820   57466 kubeadm.go:309] 
	I0708 20:57:16.364958   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:57:16.365016   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:57:16.365057   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:57:16.365063   57466 kubeadm.go:309] 
	I0708 20:57:16.365208   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:57:16.365339   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:57:16.365356   57466 kubeadm.go:309] 
	I0708 20:57:16.365490   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:57:16.365589   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:57:16.365694   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:57:16.365869   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:57:16.365969   57466 kubeadm.go:309] 
	I0708 20:57:16.365972   57466 kubeadm.go:393] duration metric: took 7m56.670441698s to StartCluster
	I0708 20:57:16.366023   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:57:16.366090   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:57:16.435868   57466 cri.go:89] found id: ""
	I0708 20:57:16.435896   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.435904   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:57:16.435910   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:57:16.435969   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:57:16.478844   57466 cri.go:89] found id: ""
	I0708 20:57:16.478881   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.478896   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:57:16.478904   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:57:16.478974   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:57:16.517414   57466 cri.go:89] found id: ""
	I0708 20:57:16.517439   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.517448   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:57:16.517455   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:57:16.517516   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:57:16.557036   57466 cri.go:89] found id: ""
	I0708 20:57:16.557063   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.557074   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:57:16.557081   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:57:16.557153   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:57:16.593604   57466 cri.go:89] found id: ""
	I0708 20:57:16.593631   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.593641   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:57:16.593648   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:57:16.593704   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:57:16.634143   57466 cri.go:89] found id: ""
	I0708 20:57:16.634173   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.634183   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:57:16.634190   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:57:16.634248   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:57:16.676553   57466 cri.go:89] found id: ""
	I0708 20:57:16.676585   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.676595   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:57:16.676602   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:57:16.676663   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:57:16.715652   57466 cri.go:89] found id: ""
	I0708 20:57:16.715674   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.715682   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0708 20:57:16.715692   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:57:16.715703   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:57:16.730747   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:57:16.730776   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:57:16.814950   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:57:16.814976   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:57:16.815005   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:57:16.921144   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:57:16.921194   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:57:16.973261   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:57:16.973294   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
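The log-gathering pass above shells out to journalctl and crictl for each control-plane component and reports "No container was found matching" when a name turns up empty. A compact sketch that runs the same container listing; the command strings mirror the Run: lines above, and running it directly on the node (with sudo and crictl available) is this sketch's assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Probe each component the same way the gathering step does above: list
	// all CRI containers whose name matches, and note when none exist.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}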
	W0708 20:57:17.031242   57466 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0708 20:57:17.031307   57466 out.go:239] * 
	W0708 20:57:17.031362   57466 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.031389   57466 out.go:239] * 
	W0708 20:57:17.032214   57466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:57:17.035847   57466 out.go:177] 
	W0708 20:57:17.037198   57466 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.037247   57466 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0708 20:57:17.037274   57466 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0708 20:57:17.039077   57466 out.go:177] 
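The suggestion above (minikube issue #4172) is the usual manual follow-up for this failure mode. A rough sketch of retrying the start with the suggested kubelet cgroup-driver override; the bare "minikube" binary name and the profile argument are assumptions, since this excerpt does not show the original start command, and this is not the test harness's own retry logic:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: retry <profile>")
		os.Exit(1)
	}
	// Re-run the start with the cgroup-driver override named in the
	// suggestion above; the profile is passed in rather than hard-coded.
	cmd := exec.Command("minikube", "start", "-p", os.Args[1],
		"--extra-config=kubelet.cgroup-driver=systemd")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("start failed:", err)
		os.Exit(1)
	}
}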
	I0708 20:57:12.345241   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:14.346235   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:16.347467   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:15.940334   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:17.943302   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:18.102691   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:20.599066   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:18.847908   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:21.345112   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:20.441347   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:22.939786   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:24.940449   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:22.600192   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:25.100175   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:23.346438   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:25.845181   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.439923   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:29.940540   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.600010   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:30.099104   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.845456   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:29.845526   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.440285   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.939729   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.101616   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.598135   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.345268   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.844782   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.845440   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.940110   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:38.940964   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.600034   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:39.099711   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.100745   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:38.847223   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.344382   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.441047   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.939510   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.599982   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:46.101913   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.345029   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:45.345390   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:45.939787   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:47.940956   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:49.941949   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:48.598871   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:50.600154   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:47.346271   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:49.346661   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:51.844897   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:52.439646   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:54.440569   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:52.604096   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:55.103841   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:54.345832   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:56.845398   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:56.440640   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:58.939537   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:57.598505   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:00.098797   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:58.848087   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:01.346566   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:00.940434   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:03.439927   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:02.602188   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:05.100284   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:03.848841   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:06.346912   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:05.441676   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:07.942369   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:07.599099   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:09.601188   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:08.848926   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:11.346458   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:10.439620   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:12.440274   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:14.939694   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:12.098918   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:14.099419   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:13.844947   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:15.845203   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:16.940812   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:18.941307   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:16.599322   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:19.098815   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:21.100160   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:17.845975   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:20.347071   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:21.439802   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:23.441183   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:23.598459   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:26.098717   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:22.844674   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:24.845210   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:26.848564   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:25.939783   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:28.439490   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:28.099236   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:30.599130   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:29.344306   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:31.345070   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:30.439832   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:32.440229   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:34.441525   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:32.600143   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:35.100068   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:33.345938   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:35.845421   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:36.939642   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:38.941263   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:37.599587   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:40.099121   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:37.845529   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:40.345830   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:41.441175   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:43.941076   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:42.099418   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:44.101452   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:42.844426   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:44.846831   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:45.941732   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:48.440398   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:46.599328   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:48.600055   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:51.099949   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:47.347094   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:49.846223   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:50.940172   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:52.940229   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:54.941034   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:53.100619   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:55.599681   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:52.347726   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:54.845461   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:56.846142   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:56.941957   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.439408   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:57.600406   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.600450   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.344802   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:01.345852   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:01.939259   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:03.940182   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:02.101218   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:04.600651   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:03.845810   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:05.846170   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:05.940757   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:08.439635   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:07.100571   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:09.100718   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:08.344894   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:10.346744   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:10.440413   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:12.440882   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:14.940151   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:11.601260   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:13.603589   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:16.112928   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:12.848135   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:15.346591   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:17.440326   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:19.440421   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:18.598791   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:20.600589   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:17.845413   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:19.849057   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:21.941414   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:24.441214   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:23.100854   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:25.599374   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:22.346925   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:24.845239   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:26.941311   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:28.948332   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:28.100928   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:30.600465   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:27.345835   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:29.846655   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:31.848193   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:31.440572   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:33.939354   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:33.100068   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:35.601159   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:34.345252   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:36.346479   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:35.939843   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:37.941381   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:38.100393   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.102157   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:38.844435   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.845328   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.438849   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:42.441256   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:44.442877   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:42.601119   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:45.101132   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:43.345149   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:45.345522   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:46.940287   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:48.941589   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:47.101717   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:49.598367   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:47.846030   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:49.846247   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:51.438745   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:53.441587   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:51.599309   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:54.105369   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:56.110085   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:52.347026   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:54.845971   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:55.939702   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:57.940731   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:58.598821   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:00.599435   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:57.345043   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:59.346796   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:01.347030   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:00.439467   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:02.443994   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:04.941721   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:02.599994   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:05.098379   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:03.845802   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:05.846016   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:07.439561   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:09.440326   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:07.099339   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:09.599746   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:08.345432   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:10.347888   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:11.940331   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:13.940496   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:12.100751   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:14.597860   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:12.349653   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:14.846452   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:16.440554   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:18.441219   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:19.434076   59107 pod_ready.go:81] duration metric: took 4m0.000896796s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" ...
	E0708 21:00:19.434112   59107 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0708 21:00:19.434131   59107 pod_ready.go:38] duration metric: took 4m10.050938227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:00:19.434157   59107 kubeadm.go:591] duration metric: took 4m18.183643708s to restartPrimaryControlPlane
	W0708 21:00:19.434219   59107 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 21:00:19.434258   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 21:00:16.598896   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:18.598974   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:20.599027   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:17.345157   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:19.345498   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:21.346939   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:22.599140   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:24.600455   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:23.347325   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:25.846384   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:27.104536   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:29.598836   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:27.847635   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:30.345065   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:31.600246   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:34.099964   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:32.348256   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:34.846942   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:36.598075   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:38.599175   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:40.599720   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:37.345319   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:38.339580   59655 pod_ready.go:81] duration metric: took 4m0.000925316s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" ...
	E0708 21:00:38.339615   59655 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0708 21:00:38.339635   59655 pod_ready.go:38] duration metric: took 4m7.551446129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:00:38.339667   59655 kubeadm.go:591] duration metric: took 4m17.566917749s to restartPrimaryControlPlane
	W0708 21:00:38.339731   59655 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 21:00:38.339763   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
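The long runs of pod_ready entries above are minikube polling the Ready condition of the metrics-server pods (metrics-server-569cc877fc-h4btg and metrics-server-569cc877fc-h2dzd) until the 4m0s WaitExtra deadline expires, after which each cluster is reset. A roughly equivalent manual check is sketched below; the pod name comes from the log, while the label selector is an assumption for illustration:

    # Wait up to 4 minutes for the pod's Ready condition, mirroring minikube's WaitExtra loop
    kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-569cc877fc-h4btg --timeout=4m
    # Or print the Ready condition for all metrics-server pods (label selector assumed)
    kubectl -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'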
	I0708 21:00:43.101768   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:45.102321   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:47.599770   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:50.100703   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:51.419295   59107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.985013246s)
	I0708 21:00:51.419373   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:00:51.438876   59107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:00:51.451558   59107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:00:51.463932   59107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:00:51.463959   59107 kubeadm.go:156] found existing configuration files:
	
	I0708 21:00:51.464013   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 21:00:51.476729   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:00:51.476791   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:00:51.488357   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 21:00:51.499650   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:00:51.499720   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:00:51.510559   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 21:00:51.522747   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:00:51.522821   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:00:51.534156   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 21:00:51.545057   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:00:51.545123   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
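The grep/rm sequence above is minikube's stale kubeconfig cleanup: for each file under /etc/kubernetes it checks whether the file still references https://control-plane.minikube.internal:8443 and deletes it if not (here every file is already absent, so each grep exits with status 2 and the rm is a no-op). Condensed into one loop purely for illustration:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done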
	I0708 21:00:51.556712   59107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:00:51.766960   59107 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 21:00:52.599619   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:55.102565   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:01.185862   59107 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 21:01:01.185936   59107 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:01:01.186061   59107 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:01:01.186246   59107 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:01:01.186375   59107 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 21:01:01.186477   59107 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:01:01.188387   59107 out.go:204]   - Generating certificates and keys ...
	I0708 21:01:01.188489   59107 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:01:01.188575   59107 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:01:01.188655   59107 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 21:01:01.188754   59107 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 21:01:01.188856   59107 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 21:01:01.188937   59107 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 21:01:01.189015   59107 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 21:01:01.189107   59107 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 21:01:01.189216   59107 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 21:01:01.189326   59107 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 21:01:01.189381   59107 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 21:01:01.189445   59107 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:01:01.189504   59107 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:01:01.189571   59107 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 21:01:01.189636   59107 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:01:01.189732   59107 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:01:01.189822   59107 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:01:01.189939   59107 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:01:01.190019   59107 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:01:01.192426   59107 out.go:204]   - Booting up control plane ...
	I0708 21:01:01.192527   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:01:01.192598   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:01:01.192674   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:01:01.192795   59107 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:01:01.192892   59107 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:01:01.192949   59107 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:01:01.193078   59107 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 21:01:01.193150   59107 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 21:01:01.193204   59107 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001227366s
	I0708 21:01:01.193274   59107 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 21:01:01.193329   59107 kubeadm.go:309] [api-check] The API server is healthy after 5.506719576s
	I0708 21:01:01.193428   59107 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 21:01:01.193574   59107 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 21:01:01.193655   59107 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 21:01:01.193854   59107 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-239931 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 21:01:01.193936   59107 kubeadm.go:309] [bootstrap-token] Using token: uu1yg0.6mx8u39sjlxfysca
	I0708 21:01:01.196508   59107 out.go:204]   - Configuring RBAC rules ...
	I0708 21:01:01.196638   59107 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 21:01:01.196748   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 21:01:01.196867   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 21:01:01.196978   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 21:01:01.197141   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 21:01:01.197217   59107 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 21:01:01.197316   59107 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 21:01:01.197355   59107 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 21:01:01.197397   59107 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 21:01:01.197403   59107 kubeadm.go:309] 
	I0708 21:01:01.197451   59107 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 21:01:01.197457   59107 kubeadm.go:309] 
	I0708 21:01:01.197542   59107 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 21:01:01.197555   59107 kubeadm.go:309] 
	I0708 21:01:01.197597   59107 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 21:01:01.197673   59107 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 21:01:01.197748   59107 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 21:01:01.197761   59107 kubeadm.go:309] 
	I0708 21:01:01.197850   59107 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 21:01:01.197860   59107 kubeadm.go:309] 
	I0708 21:01:01.197903   59107 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 21:01:01.197912   59107 kubeadm.go:309] 
	I0708 21:01:01.197971   59107 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 21:01:01.198059   59107 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 21:01:01.198155   59107 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 21:01:01.198165   59107 kubeadm.go:309] 
	I0708 21:01:01.198279   59107 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 21:01:01.198389   59107 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 21:01:01.198400   59107 kubeadm.go:309] 
	I0708 21:01:01.198515   59107 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token uu1yg0.6mx8u39sjlxfysca \
	I0708 21:01:01.198663   59107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 21:01:01.198697   59107 kubeadm.go:309] 	--control-plane 
	I0708 21:01:01.198706   59107 kubeadm.go:309] 
	I0708 21:01:01.198821   59107 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 21:01:01.198830   59107 kubeadm.go:309] 
	I0708 21:01:01.198942   59107 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token uu1yg0.6mx8u39sjlxfysca \
	I0708 21:01:01.199078   59107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 21:01:01.199095   59107 cni.go:84] Creating CNI manager for ""
	I0708 21:01:01.199104   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:01:01.201409   59107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 21:00:57.600428   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:00.101501   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:01.202540   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 21:01:01.214691   59107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
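The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. Its exact contents are not shown in the log; the snippet below is a generic bridge-plus-portmap conflist in the standard CNI format, included only to illustrate the kind of file being written (the subnet and plugin options are assumptions, not minikube's literal output):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }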
	I0708 21:01:01.238039   59107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 21:01:01.238180   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:01.238204   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-239931 minikube.k8s.io/updated_at=2024_07_08T21_01_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=embed-certs-239931 minikube.k8s.io/primary=true
	I0708 21:01:01.255228   59107 ops.go:34] apiserver oom_adj: -16
	I0708 21:01:01.441736   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:01.942570   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.442775   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.941941   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:03.441910   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:03.942762   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:04.442791   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:04.942122   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.600102   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:04.601357   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:05.442031   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:05.942414   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:06.442353   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:06.942075   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:07.442007   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:07.941952   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:08.442578   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:08.942110   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:09.442438   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:09.942436   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:10.666697   59655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.326909913s)
	I0708 21:01:10.666766   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:10.684044   59655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:01:10.695291   59655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:01:10.705771   59655 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:01:10.705790   59655 kubeadm.go:156] found existing configuration files:
	
	I0708 21:01:10.705829   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0708 21:01:10.717858   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:01:10.717911   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:01:10.728721   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0708 21:01:10.738917   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:01:10.738985   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:01:10.749795   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0708 21:01:10.760976   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:01:10.761036   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:01:10.771625   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0708 21:01:10.781677   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:01:10.781738   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
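	The stale-config cleanup recorded above (kubeadm.go:162) greps each existing kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not contain it, so the following `kubeadm init` can regenerate them. Here is a minimal local Go sketch of that pattern, with the endpoint and file list taken from the log lines above; the real run executes the equivalent commands with sudo over SSH, so this is illustrative only.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Endpoint and file list as recorded in the log above.
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// `grep -q` exits non-zero when the endpoint is missing or the file
		// does not exist; either way the stale file is removed so kubeadm
		// can write a fresh one.
		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Println("removing stale", f)
			_ = os.Remove(f)
		}
	}
}
```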
	I0708 21:01:10.791622   59655 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:01:10.855152   59655 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 21:01:10.855246   59655 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:01:11.027005   59655 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:01:11.027132   59655 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:01:11.027245   59655 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 21:01:11.262898   59655 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:01:07.098267   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:09.099083   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:11.099398   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:11.264777   59655 out.go:204]   - Generating certificates and keys ...
	I0708 21:01:11.264897   59655 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:01:11.265011   59655 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:01:11.265143   59655 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 21:01:11.265245   59655 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 21:01:11.265331   59655 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 21:01:11.265412   59655 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 21:01:11.265516   59655 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 21:01:11.265601   59655 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 21:01:11.265692   59655 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 21:01:11.265806   59655 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 21:01:11.265883   59655 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 21:01:11.265979   59655 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:01:11.307094   59655 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:01:11.410219   59655 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 21:01:11.840751   59655 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:01:12.163906   59655 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:01:12.260797   59655 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:01:12.261513   59655 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:01:12.264128   59655 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:01:12.266095   59655 out.go:204]   - Booting up control plane ...
	I0708 21:01:12.266212   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:01:12.266301   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:01:12.267540   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:01:12.290823   59655 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:01:12.291578   59655 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:01:12.291693   59655 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:01:10.442308   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:10.942270   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:11.442233   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:11.942533   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:12.442040   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:12.942629   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:13.441853   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:13.565655   59107 kubeadm.go:1107] duration metric: took 12.327535547s to wait for elevateKubeSystemPrivileges
	W0708 21:01:13.565704   59107 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 21:01:13.565714   59107 kubeadm.go:393] duration metric: took 5m12.375759038s to StartCluster
	I0708 21:01:13.565736   59107 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:13.565845   59107 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:01:13.568610   59107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:13.568940   59107 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:01:13.568980   59107 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 21:01:13.569061   59107 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-239931"
	I0708 21:01:13.569098   59107 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-239931"
	W0708 21:01:13.569113   59107 addons.go:243] addon storage-provisioner should already be in state true
	I0708 21:01:13.569136   59107 addons.go:69] Setting metrics-server=true in profile "embed-certs-239931"
	I0708 21:01:13.569098   59107 addons.go:69] Setting default-storageclass=true in profile "embed-certs-239931"
	I0708 21:01:13.569169   59107 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-239931"
	I0708 21:01:13.569178   59107 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:01:13.569149   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.569185   59107 addons.go:234] Setting addon metrics-server=true in "embed-certs-239931"
	W0708 21:01:13.569244   59107 addons.go:243] addon metrics-server should already be in state true
	I0708 21:01:13.569274   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.569617   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569639   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569648   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.569671   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.569673   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569698   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.570670   59107 out.go:177] * Verifying Kubernetes components...
	I0708 21:01:13.572338   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:01:13.590692   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40615
	I0708 21:01:13.590708   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0708 21:01:13.590701   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0708 21:01:13.591271   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591375   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591622   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591792   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.591806   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.591888   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.591909   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.592348   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.592368   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.592387   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.592422   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.592655   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.593065   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.593092   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.593568   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.594139   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.594196   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.596834   59107 addons.go:234] Setting addon default-storageclass=true in "embed-certs-239931"
	W0708 21:01:13.596857   59107 addons.go:243] addon default-storageclass should already be in state true
	I0708 21:01:13.596892   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.597258   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.597278   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.615398   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0708 21:01:13.616090   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.617374   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.617395   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.617542   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37809
	I0708 21:01:13.618025   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.618066   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.618450   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.618538   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.618563   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.618953   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.619151   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.621015   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.622114   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43107
	I0708 21:01:13.622533   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.623046   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.623071   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.623346   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.623757   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.624750   59107 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 21:01:13.625744   59107 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:01:13.626604   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 21:01:13.626626   59107 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 21:01:13.626650   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.627717   59107 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:13.627737   59107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 21:01:13.627756   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.628207   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.628245   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.631548   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.633692   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.633737   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.634732   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.634960   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.635186   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.635262   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.635282   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.635415   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.635581   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.635946   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.636122   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.636282   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.636468   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.650948   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34883
	I0708 21:01:13.651543   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.652143   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.652165   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.652659   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.652835   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.654717   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.654971   59107 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:13.654988   59107 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 21:01:13.655006   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.658670   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.659361   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.659475   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.659800   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.660109   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.660275   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.660406   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.813860   59107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:01:13.832841   59107 node_ready.go:35] waiting up to 6m0s for node "embed-certs-239931" to be "Ready" ...
	I0708 21:01:13.842398   59107 node_ready.go:49] node "embed-certs-239931" has status "Ready":"True"
	I0708 21:01:13.842420   59107 node_ready.go:38] duration metric: took 9.540746ms for node "embed-certs-239931" to be "Ready" ...
	I0708 21:01:13.842430   59107 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:13.853426   59107 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.861421   59107 pod_ready.go:92] pod "etcd-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.861451   59107 pod_ready.go:81] duration metric: took 7.991733ms for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.861466   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.873198   59107 pod_ready.go:92] pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.873228   59107 pod_ready.go:81] duration metric: took 11.754017ms for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.873243   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.882509   59107 pod_ready.go:92] pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.882560   59107 pod_ready.go:81] duration metric: took 9.307056ms for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.882574   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.890814   59107 pod_ready.go:92] pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.890843   59107 pod_ready.go:81] duration metric: took 8.26049ms for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.890854   59107 pod_ready.go:38] duration metric: took 48.414688ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:13.890872   59107 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:13.890934   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:13.913170   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 21:01:13.913199   59107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 21:01:13.936334   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:13.942642   59107 api_server.go:72] duration metric: took 373.624334ms to wait for apiserver process to appear ...
	I0708 21:01:13.942673   59107 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:13.942696   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 21:01:13.947241   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0708 21:01:13.948330   59107 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:13.948354   59107 api_server.go:131] duration metric: took 5.673644ms to wait for apiserver health ...
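	The health wait above amounts to polling the apiserver's `/healthz` endpoint until it returns 200 ("ok"). The following standalone Go sketch performs that style of probe against the endpoint recorded in the log; TLS verification is skipped only because the sketch does not load the cluster CA that a real client would use, and it is not minikube's own api_server.go code.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	url := "https://192.168.61.126:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// No cluster CA bundle is loaded in this sketch, so skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// Matches the "returned 200: ok" lines in the log.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}
```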
	I0708 21:01:13.948364   59107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:13.968333   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:13.999888   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 21:01:13.999920   59107 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 21:01:14.072446   59107 system_pods.go:59] 5 kube-system pods found
	I0708 21:01:14.072553   59107 system_pods.go:61] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.072575   59107 system_pods.go:61] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.072594   59107 system_pods.go:61] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.072608   59107 system_pods.go:61] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending
	I0708 21:01:14.072621   59107 system_pods.go:61] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.072637   59107 system_pods.go:74] duration metric: took 124.266452ms to wait for pod list to return data ...
	I0708 21:01:14.072663   59107 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:14.111310   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:14.111337   59107 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 21:01:14.196596   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:14.248043   59107 default_sa.go:45] found service account: "default"
	I0708 21:01:14.248075   59107 default_sa.go:55] duration metric: took 175.396297ms for default service account to be created ...
	I0708 21:01:14.248086   59107 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:14.381129   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.381166   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.381490   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.381507   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.381517   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.381525   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.383203   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:14.383213   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.383229   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.430533   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.430558   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.430835   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:14.431498   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.431558   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.440088   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.440129   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.440140   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.440148   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.440156   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.440162   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.440171   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.440176   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.440199   59107 retry.go:31] will retry after 211.74015ms: missing components: kube-dns, kube-proxy
	I0708 21:01:14.660845   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.660901   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.660916   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.660928   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.660938   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.660946   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.660990   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.661002   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.661036   59107 retry.go:31] will retry after 318.627165ms: missing components: kube-dns, kube-proxy
	I0708 21:01:14.988296   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.988336   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.988348   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.988359   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.988369   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.988376   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.988388   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.988398   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.988425   59107 retry.go:31] will retry after 333.622066ms: missing components: kube-dns, kube-proxy
	I0708 21:01:15.024853   59107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.056470802s)
	I0708 21:01:15.024902   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.024914   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.025237   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.025264   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.025266   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.025279   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.025288   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.025550   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.025566   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.348381   59107 system_pods.go:86] 8 kube-system pods found
	I0708 21:01:15.348419   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.348430   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.348440   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:15.348448   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:15.348455   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:15.348464   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:15.348473   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:15.348483   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:15.348502   59107 retry.go:31] will retry after 415.910372ms: missing components: kube-dns, kube-proxy
	I0708 21:01:15.736384   59107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.539741133s)
	I0708 21:01:15.736440   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.736456   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.736743   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.736782   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.736763   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.736803   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.736851   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.737097   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.737135   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.737148   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.737157   59107 addons.go:475] Verifying addon metrics-server=true in "embed-certs-239931"
	I0708 21:01:15.739025   59107 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0708 21:01:13.102963   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:15.601580   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:16.101049   58678 pod_ready.go:81] duration metric: took 4m0.00868677s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	E0708 21:01:16.101081   58678 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0708 21:01:16.101094   58678 pod_ready.go:38] duration metric: took 4m5.070908601s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:16.101112   58678 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:16.101147   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:16.101210   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:16.175601   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:16.175631   58678 cri.go:89] found id: ""
	I0708 21:01:16.175642   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:16.175703   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.182938   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:16.183013   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:16.261385   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:16.261411   58678 cri.go:89] found id: ""
	I0708 21:01:16.261423   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:16.261483   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.266231   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:16.266310   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:15.741167   59107 addons.go:510] duration metric: took 2.172185316s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0708 21:01:15.890659   59107 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:15.890702   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.890713   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.890723   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:15.890731   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:15.890738   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:15.890745   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Running
	I0708 21:01:15.890751   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:15.890759   59107 system_pods.go:89] "metrics-server-569cc877fc-f2dkn" [1d3c3e8e-356d-40b9-8add-35eec096e9f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:15.890772   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:15.890790   59107 retry.go:31] will retry after 557.749423ms: missing components: kube-dns
	I0708 21:01:16.457046   59107 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:16.457093   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:16.457105   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:16.457114   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:16.457124   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:16.457131   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:16.457137   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Running
	I0708 21:01:16.457143   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:16.457153   59107 system_pods.go:89] "metrics-server-569cc877fc-f2dkn" [1d3c3e8e-356d-40b9-8add-35eec096e9f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:16.457173   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:16.457183   59107 system_pods.go:126] duration metric: took 2.209089992s to wait for k8s-apps to be running ...
	I0708 21:01:16.457196   59107 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:16.457251   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:16.474652   59107 system_svc.go:56] duration metric: took 17.443712ms WaitForService to wait for kubelet
	I0708 21:01:16.474691   59107 kubeadm.go:576] duration metric: took 2.905677883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:16.474715   59107 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:16.478431   59107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:16.478456   59107 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:16.478480   59107 node_conditions.go:105] duration metric: took 3.758433ms to run NodePressure ...
	I0708 21:01:16.478502   59107 start.go:240] waiting for startup goroutines ...
	I0708 21:01:16.478515   59107 start.go:245] waiting for cluster config update ...
	I0708 21:01:16.478529   59107 start.go:254] writing updated cluster config ...
	I0708 21:01:16.478860   59107 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:16.536046   59107 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:16.538131   59107 out.go:177] * Done! kubectl is now configured to use "embed-certs-239931" cluster and "default" namespace by default
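	With embed-certs-239931 started, the retries recorded above ("will retry after ...: missing components: kube-dns, kube-proxy") boil down to repeatedly listing kube-system pods until every required component is up. Below is a rough standalone equivalent that shells out to kubectl instead of using minikube's internal system_pods helpers; the context name comes from the log, the required-component list is trimmed to the two that were still missing, and it checks pod phase rather than the full Ready condition minikube tracks, so it is only a sketch.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// running reports whether any pod whose name starts with prefix shows
// phase Running in the "name=phase" lines produced below.
func running(out, prefix string) bool {
	for _, line := range strings.Split(out, "\n") {
		if strings.HasPrefix(line, prefix) && strings.HasSuffix(line, "=Running") {
			return true
		}
	}
	return false
}

func main() {
	required := []string{"coredns", "kube-proxy"}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Emit one "name=phase" line per kube-system pod.
		out, err := exec.Command("kubectl", "--context", "embed-certs-239931",
			"-n", "kube-system", "get", "pods", "-o",
			`jsonpath={range .items[*]}{.metadata.name}={.status.phase}{"\n"}{end}`).Output()
		if err == nil {
			var missing []string
			for _, name := range required {
				if !running(string(out), name) {
					missing = append(missing, name)
				}
			}
			if len(missing) == 0 {
				fmt.Println("all required kube-system components are Running")
				return
			}
			fmt.Println("will retry, missing components:", strings.Join(missing, ", "))
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for kube-system pods")
}
```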
	I0708 21:01:12.440116   59655 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 21:01:12.440237   59655 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 21:01:13.441567   59655 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001312349s
	I0708 21:01:13.441690   59655 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 21:01:18.943345   59655 kubeadm.go:309] [api-check] The API server is healthy after 5.501634999s
	I0708 21:01:18.963728   59655 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 21:01:18.980036   59655 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 21:01:19.028362   59655 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 21:01:19.028635   59655 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-071971 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 21:01:19.051700   59655 kubeadm.go:309] [bootstrap-token] Using token: guoi3f.tsy4dvdlokyfqa2b
	I0708 21:01:19.053224   59655 out.go:204]   - Configuring RBAC rules ...
	I0708 21:01:19.053323   59655 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 21:01:19.063058   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 21:01:19.077711   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 21:01:19.090415   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 21:01:19.095539   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 21:01:19.101465   59655 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 21:01:19.351634   59655 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 21:01:19.809053   59655 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 21:01:20.359069   59655 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 21:01:20.359125   59655 kubeadm.go:309] 
	I0708 21:01:20.359193   59655 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 21:01:20.359227   59655 kubeadm.go:309] 
	I0708 21:01:20.359368   59655 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 21:01:20.359379   59655 kubeadm.go:309] 
	I0708 21:01:20.359439   59655 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 21:01:20.359553   59655 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 21:01:20.359613   59655 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 21:01:20.359624   59655 kubeadm.go:309] 
	I0708 21:01:20.359686   59655 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 21:01:20.359694   59655 kubeadm.go:309] 
	I0708 21:01:20.359733   59655 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 21:01:20.359740   59655 kubeadm.go:309] 
	I0708 21:01:20.359787   59655 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 21:01:20.359899   59655 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 21:01:20.359994   59655 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 21:01:20.360003   59655 kubeadm.go:309] 
	I0708 21:01:20.360096   59655 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 21:01:20.360194   59655 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 21:01:20.360202   59655 kubeadm.go:309] 
	I0708 21:01:20.360311   59655 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token guoi3f.tsy4dvdlokyfqa2b \
	I0708 21:01:20.360468   59655 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 21:01:20.360507   59655 kubeadm.go:309] 	--control-plane 
	I0708 21:01:20.360516   59655 kubeadm.go:309] 
	I0708 21:01:20.360628   59655 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 21:01:20.360639   59655 kubeadm.go:309] 
	I0708 21:01:20.360765   59655 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token guoi3f.tsy4dvdlokyfqa2b \
	I0708 21:01:20.360891   59655 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 21:01:20.361857   59655 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 21:01:20.361894   59655 cni.go:84] Creating CNI manager for ""
	I0708 21:01:20.361910   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:01:20.363579   59655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 21:01:16.309299   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:16.309328   58678 cri.go:89] found id: ""
	I0708 21:01:16.309337   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:16.309403   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.314236   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:16.314320   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:16.371891   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:16.371919   58678 cri.go:89] found id: ""
	I0708 21:01:16.371937   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:16.372008   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.380409   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:16.380480   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:16.428411   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:16.428441   58678 cri.go:89] found id: ""
	I0708 21:01:16.428452   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:16.428514   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.433310   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:16.433390   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:16.474785   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:16.474807   58678 cri.go:89] found id: ""
	I0708 21:01:16.474816   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:16.474882   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.480849   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:16.480933   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:16.529115   58678 cri.go:89] found id: ""
	I0708 21:01:16.529136   58678 logs.go:276] 0 containers: []
	W0708 21:01:16.529146   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:16.529153   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:16.529222   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:16.576499   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:16.576519   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:16.576527   58678 cri.go:89] found id: ""
	I0708 21:01:16.576536   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:16.576584   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.581261   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.587704   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:16.587733   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:16.651329   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:16.651385   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:16.706341   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:16.706380   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:17.302518   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:17.302570   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:17.373619   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:17.373651   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:17.414687   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:17.414722   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:17.470462   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:17.470499   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:17.487151   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:17.487189   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:17.625611   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:17.625655   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:17.673291   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:17.673325   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:17.712222   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:17.712253   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:17.752635   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:17.752665   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:17.794056   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:17.794085   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
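The cri.go/logs.go cycle above first resolves container IDs with crictl's --quiet flag and then tails each container's logs. A small Go sketch of that ID lookup, assuming only that crictl is on PATH and that sudo is available, could look like this (it is an illustration of the same command, not minikube's own helper):

    // Sketch of the container-ID lookup seen in the cri.go lines above:
    // run crictl with --quiet so only container IDs are printed, then split them.
    // Assumes crictl is installed and the caller may run it via sudo.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listContainerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }

Because --quiet prints one container ID per line, each non-empty line of output corresponds to one "found id" entry in the log above.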
	I0708 21:01:20.341805   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:20.362405   58678 api_server.go:72] duration metric: took 4m15.074761342s to wait for apiserver process to appear ...
	I0708 21:01:20.362430   58678 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:20.362465   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:20.362523   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:20.409947   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:20.409974   58678 cri.go:89] found id: ""
	I0708 21:01:20.409983   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:20.410040   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.414415   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:20.414476   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:20.463162   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:20.463186   58678 cri.go:89] found id: ""
	I0708 21:01:20.463196   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:20.463263   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.468905   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:20.468986   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:20.514265   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:20.514291   58678 cri.go:89] found id: ""
	I0708 21:01:20.514299   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:20.514357   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.519003   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:20.519081   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:20.565097   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:20.565122   58678 cri.go:89] found id: ""
	I0708 21:01:20.565132   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:20.565190   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.569971   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:20.570048   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:20.614435   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:20.614459   58678 cri.go:89] found id: ""
	I0708 21:01:20.614469   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:20.614525   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.619745   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:20.619824   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:20.660213   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:20.660235   58678 cri.go:89] found id: ""
	I0708 21:01:20.660242   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:20.660292   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.664740   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:20.664822   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:20.710279   58678 cri.go:89] found id: ""
	I0708 21:01:20.710300   58678 logs.go:276] 0 containers: []
	W0708 21:01:20.710307   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:20.710312   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:20.710359   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:20.751880   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:20.751906   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:20.751910   58678 cri.go:89] found id: ""
	I0708 21:01:20.751917   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:20.752028   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.756530   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.760679   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:20.760705   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:20.800525   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:20.800556   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:20.845629   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:20.845666   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:20.364837   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 21:01:20.376977   59655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
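The bridge CNI step above creates /etc/cni/net.d and copies a 1-k8s.conflist into it. The log does not show the file's contents, so the Go sketch below is illustrative only: the JSON fields and subnet are assumptions about what a minimal bridge conflist typically contains, not the file minikube actually generates.

    // Illustrative only: writes a minimal bridge CNI conflist in the same
    // location as the 1-k8s.conflist copied in the log. Field values and the
    // subnet are assumptions; minikube's generated file may differ.
    package main

    import (
        "os"
        "path/filepath"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        dir := "/etc/cni/net.d" // mirrors "sudo mkdir -p /etc/cni/net.d" in the log
        if err := os.MkdirAll(dir, 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }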
	I0708 21:01:20.400133   59655 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 21:01:20.400241   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:20.400291   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-071971 minikube.k8s.io/updated_at=2024_07_08T21_01_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=default-k8s-diff-port-071971 minikube.k8s.io/primary=true
	I0708 21:01:20.597429   59655 ops.go:34] apiserver oom_adj: -16
	I0708 21:01:20.597490   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.098582   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.597812   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:22.097790   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.356988   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:21.357025   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:21.416130   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:21.416160   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:21.431831   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:21.431865   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:21.479568   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:21.479597   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:21.527937   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:21.527970   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:21.569569   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:21.569605   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:21.691646   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:21.691678   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:21.737949   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:21.737975   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:21.789038   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:21.789069   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:21.831677   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:21.831703   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:01:24.380502   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 21:01:24.385139   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I0708 21:01:24.386116   58678 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:24.386137   58678 api_server.go:131] duration metric: took 4.023699983s to wait for apiserver health ...
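The healthz wait above is an HTTPS GET against the apiserver's /healthz endpoint until it returns 200 with body "ok". A self-contained Go approximation is shown below; TLS verification is skipped only to keep the sketch short, and the real check's client configuration may differ. The address is the one from this run.

    // Sketch of the healthz probe logged above: GET https://<apiserver>/healthz
    // and expect "200 ok". InsecureSkipVerify is for illustration only.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.108:8443/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect "200 ok"
    }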
	I0708 21:01:24.386146   58678 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:24.386171   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:24.386225   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:24.423786   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:24.423809   58678 cri.go:89] found id: ""
	I0708 21:01:24.423816   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:24.423869   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.428385   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:24.428447   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:24.467186   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:24.467206   58678 cri.go:89] found id: ""
	I0708 21:01:24.467213   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:24.467269   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.472208   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:24.472273   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:24.511157   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:24.511188   58678 cri.go:89] found id: ""
	I0708 21:01:24.511199   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:24.511266   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.516077   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:24.516144   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:24.556095   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:24.556115   58678 cri.go:89] found id: ""
	I0708 21:01:24.556122   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:24.556171   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.560735   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:24.560795   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:24.602473   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:24.602498   58678 cri.go:89] found id: ""
	I0708 21:01:24.602508   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:24.602562   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.608926   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:24.609003   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:24.653230   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:24.653258   58678 cri.go:89] found id: ""
	I0708 21:01:24.653267   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:24.653327   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.657884   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:24.657954   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:24.700775   58678 cri.go:89] found id: ""
	I0708 21:01:24.700800   58678 logs.go:276] 0 containers: []
	W0708 21:01:24.700810   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:24.700817   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:24.700876   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:24.738593   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:24.738619   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:24.738625   58678 cri.go:89] found id: ""
	I0708 21:01:24.738633   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:24.738689   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.743324   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.747684   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:24.747709   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:24.800431   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:24.800467   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:24.910702   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:24.910738   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:24.967323   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:24.967355   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:25.012335   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:25.012367   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:25.393024   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:25.393064   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:01:25.449280   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:25.449315   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:25.488676   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:25.488703   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:25.503705   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:25.503734   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:25.551111   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:25.551155   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:25.598388   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:25.598425   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:25.642052   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:25.642087   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:25.680632   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:25.680665   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:22.597628   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:23.098128   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:23.597756   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:24.097555   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:24.598149   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:25.098149   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:25.598255   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:26.097514   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:26.598211   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:27.097610   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.229251   58678 system_pods.go:59] 8 kube-system pods found
	I0708 21:01:28.229286   58678 system_pods.go:61] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running
	I0708 21:01:28.229293   58678 system_pods.go:61] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running
	I0708 21:01:28.229298   58678 system_pods.go:61] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running
	I0708 21:01:28.229304   58678 system_pods.go:61] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running
	I0708 21:01:28.229308   58678 system_pods.go:61] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 21:01:28.229312   58678 system_pods.go:61] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running
	I0708 21:01:28.229321   58678 system_pods.go:61] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:28.229327   58678 system_pods.go:61] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 21:01:28.229337   58678 system_pods.go:74] duration metric: took 3.843183956s to wait for pod list to return data ...
	I0708 21:01:28.229347   58678 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:28.232297   58678 default_sa.go:45] found service account: "default"
	I0708 21:01:28.232323   58678 default_sa.go:55] duration metric: took 2.96709ms for default service account to be created ...
	I0708 21:01:28.232333   58678 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:28.240720   58678 system_pods.go:86] 8 kube-system pods found
	I0708 21:01:28.240750   58678 system_pods.go:89] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running
	I0708 21:01:28.240755   58678 system_pods.go:89] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running
	I0708 21:01:28.240760   58678 system_pods.go:89] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running
	I0708 21:01:28.240765   58678 system_pods.go:89] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running
	I0708 21:01:28.240770   58678 system_pods.go:89] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 21:01:28.240774   58678 system_pods.go:89] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running
	I0708 21:01:28.240781   58678 system_pods.go:89] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:28.240787   58678 system_pods.go:89] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 21:01:28.240794   58678 system_pods.go:126] duration metric: took 8.454141ms to wait for k8s-apps to be running ...
	I0708 21:01:28.240804   58678 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:28.240855   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:28.256600   58678 system_svc.go:56] duration metric: took 15.789082ms WaitForService to wait for kubelet
	I0708 21:01:28.256630   58678 kubeadm.go:576] duration metric: took 4m22.968988646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:28.256654   58678 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:28.260384   58678 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:28.260402   58678 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:28.260412   58678 node_conditions.go:105] duration metric: took 3.753004ms to run NodePressure ...
	I0708 21:01:28.260422   58678 start.go:240] waiting for startup goroutines ...
	I0708 21:01:28.260429   58678 start.go:245] waiting for cluster config update ...
	I0708 21:01:28.260438   58678 start.go:254] writing updated cluster config ...
	I0708 21:01:28.260686   58678 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:28.311517   58678 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:28.313560   58678 out.go:177] * Done! kubectl is now configured to use "no-preload-028021" cluster and "default" namespace by default
	I0708 21:01:27.598457   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.098475   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.598380   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:29.097496   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:29.598229   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:30.097844   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:30.598323   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:31.097781   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:31.598085   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:32.098438   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:32.598450   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.098414   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.597823   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.688717   59655 kubeadm.go:1107] duration metric: took 13.288534329s to wait for elevateKubeSystemPrivileges
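The repeated "kubectl get sa default" lines above are a poll: the command is retried roughly every 500ms until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges wait reports finishing here. A Go sketch of an equivalent poll, reusing the binary and kubeconfig paths from the log, could be:

    // Sketch of the polling loop seen above: keep running
    // "kubectl get sa default" until it succeeds or a deadline passes.
    // Paths are taken from the log; adjust for another environment.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.30.2/kubectl"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil { // non-zero exit while the SA is missing
                fmt.Println("default service account is present")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }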
	W0708 21:01:33.688756   59655 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 21:01:33.688765   59655 kubeadm.go:393] duration metric: took 5m12.976251287s to StartCluster
	I0708 21:01:33.688782   59655 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:33.688874   59655 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:01:33.690446   59655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:33.690691   59655 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:01:33.690814   59655 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 21:01:33.690875   59655 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-071971"
	I0708 21:01:33.690893   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:01:33.690907   59655 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-071971"
	I0708 21:01:33.690902   59655 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-071971"
	W0708 21:01:33.690915   59655 addons.go:243] addon storage-provisioner should already be in state true
	I0708 21:01:33.690914   59655 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-071971"
	I0708 21:01:33.690939   59655 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-071971"
	I0708 21:01:33.690945   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.690957   59655 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-071971"
	W0708 21:01:33.690968   59655 addons.go:243] addon metrics-server should already be in state true
	I0708 21:01:33.691002   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.691272   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691274   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691294   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.691299   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.691323   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691361   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.692506   59655 out.go:177] * Verifying Kubernetes components...
	I0708 21:01:33.694134   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:01:33.708343   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0708 21:01:33.708681   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0708 21:01:33.708849   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.709011   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.709402   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.709421   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.709559   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.709578   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.709795   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.709864   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.710365   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.710411   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.710417   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.710445   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.710809   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
	I0708 21:01:33.711278   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.711858   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.711892   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.712294   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.712604   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.716565   59655 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-071971"
	W0708 21:01:33.716590   59655 addons.go:243] addon default-storageclass should already be in state true
	I0708 21:01:33.716620   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.716990   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.717041   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.728113   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0708 21:01:33.728257   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0708 21:01:33.728694   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.728742   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.729182   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.729211   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.729331   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.729353   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.729605   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.729663   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.729781   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.729846   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.731832   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.731878   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.734021   59655 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:01:33.734026   59655 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 21:01:33.736062   59655 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:33.736094   59655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 21:01:33.736122   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.736174   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 21:01:33.736192   59655 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 21:01:33.736222   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.736793   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0708 21:01:33.737419   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.739820   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.739837   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.740075   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740272   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.740463   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.740484   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740512   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740818   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.740967   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.741060   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.741213   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.741225   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.741279   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.741309   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.741438   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.741596   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.741587   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.741730   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.741820   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.758223   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0708 21:01:33.758739   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.759237   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.759254   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.759633   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.759909   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.761455   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.761644   59655 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:33.761656   59655 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 21:01:33.761669   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.764245   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.764541   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.764563   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.764701   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.764872   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.765022   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.765126   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.926862   59655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:01:33.980155   59655 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-071971" to be "Ready" ...
	I0708 21:01:33.993505   59655 node_ready.go:49] node "default-k8s-diff-port-071971" has status "Ready":"True"
	I0708 21:01:33.993526   59655 node_ready.go:38] duration metric: took 13.344616ms for node "default-k8s-diff-port-071971" to be "Ready" ...
	I0708 21:01:33.993534   59655 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:34.001402   59655 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:34.045900   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:34.058039   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 21:01:34.058059   59655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 21:01:34.102931   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:34.121513   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 21:01:34.121541   59655 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 21:01:34.190181   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:34.190208   59655 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 21:01:34.232200   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:35.071867   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.071888   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.071977   59655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.026035336s)
	I0708 21:01:35.072026   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.072044   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.072157   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.072192   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.072205   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.072212   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.073887   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.073887   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.073917   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.073989   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.074003   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.074013   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.073907   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.074111   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.074438   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.074461   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.146813   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.146840   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.147181   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.147201   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.337952   59655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.105709862s)
	I0708 21:01:35.338010   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.338023   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.338415   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.338447   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.338461   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.338471   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.338484   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.338733   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.338751   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.338763   59655 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-071971"
	I0708 21:01:35.340678   59655 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0708 21:01:35.341902   59655 addons.go:510] duration metric: took 1.651084154s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
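The addon step above copies the metrics-server manifests under /etc/kubernetes/addons/ and applies them in a single kubectl invocation. As an illustration only, the same apply re-expressed as a small Go wrapper (all paths taken from the log) might look like:

    // Illustrative wrapper around the "kubectl apply -f ... -f ..." call logged
    // above. It assumes the manifests were already copied to these paths.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.30.2/kubectl"
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }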
	I0708 21:01:36.011439   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:37.008538   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.008567   59655 pod_ready.go:81] duration metric: took 3.0071384s for pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.008582   59655 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.013291   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.013313   59655 pod_ready.go:81] duration metric: took 4.723566ms for pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.013326   59655 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.017974   59655 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.017997   59655 pod_ready.go:81] duration metric: took 4.66297ms for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.018009   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.022526   59655 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.022550   59655 pod_ready.go:81] duration metric: took 4.533312ms for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.022563   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.027009   59655 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.027032   59655 pod_ready.go:81] duration metric: took 4.462202ms for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.027042   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l2mdd" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.406030   59655 pod_ready.go:92] pod "kube-proxy-l2mdd" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.406055   59655 pod_ready.go:81] duration metric: took 379.00677ms for pod "kube-proxy-l2mdd" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.406064   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.806120   59655 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.806141   59655 pod_ready.go:81] duration metric: took 400.070718ms for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.806151   59655 pod_ready.go:38] duration metric: took 3.812606006s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:37.806165   59655 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:37.806214   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:37.822846   59655 api_server.go:72] duration metric: took 4.132126389s to wait for apiserver process to appear ...
	I0708 21:01:37.822872   59655 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:37.822889   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 21:01:37.827017   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 200:
	ok
	I0708 21:01:37.827906   59655 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:37.827930   59655 api_server.go:131] duration metric: took 5.051704ms to wait for apiserver health ...
	I0708 21:01:37.827938   59655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:38.010909   59655 system_pods.go:59] 9 kube-system pods found
	I0708 21:01:38.010937   59655 system_pods.go:61] "coredns-7db6d8ff4d-8msvk" [38c1e0eb-5eb4-4acb-a5ae-c72871884e3d] Running
	I0708 21:01:38.010942   59655 system_pods.go:61] "coredns-7db6d8ff4d-hq7zj" [ddb0f99d-a91d-4bb7-96e7-695b6101a601] Running
	I0708 21:01:38.010946   59655 system_pods.go:61] "etcd-default-k8s-diff-port-071971" [e3399214-404c-423e-9648-b4d920028a92] Running
	I0708 21:01:38.010949   59655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071971" [7b726b49-c243-4126-b6d2-fc12abc9a042] Running
	I0708 21:01:38.010953   59655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071971" [6a731125-daa4-4da1-b9e0-1206da592fde] Running
	I0708 21:01:38.010956   59655 system_pods.go:61] "kube-proxy-l2mdd" [b1d70ae2-ed86-49bd-8910-a12c5cd8091a] Running
	I0708 21:01:38.010959   59655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071971" [dc238033-038e-49ec-ba48-392b0ec2f7bd] Running
	I0708 21:01:38.010965   59655 system_pods.go:61] "metrics-server-569cc877fc-k8vhl" [09f957f3-d76f-4f21-b9a6-e5b249d07e1e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:38.010970   59655 system_pods.go:61] "storage-provisioner" [805a8fdb-ed9e-4f80-a2c9-7d8a0155b228] Running
	I0708 21:01:38.010979   59655 system_pods.go:74] duration metric: took 183.034922ms to wait for pod list to return data ...
	I0708 21:01:38.010987   59655 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:38.205307   59655 default_sa.go:45] found service account: "default"
	I0708 21:01:38.205331   59655 default_sa.go:55] duration metric: took 194.338319ms for default service account to be created ...
	I0708 21:01:38.205340   59655 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:38.410958   59655 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:38.410988   59655 system_pods.go:89] "coredns-7db6d8ff4d-8msvk" [38c1e0eb-5eb4-4acb-a5ae-c72871884e3d] Running
	I0708 21:01:38.410995   59655 system_pods.go:89] "coredns-7db6d8ff4d-hq7zj" [ddb0f99d-a91d-4bb7-96e7-695b6101a601] Running
	I0708 21:01:38.411000   59655 system_pods.go:89] "etcd-default-k8s-diff-port-071971" [e3399214-404c-423e-9648-b4d920028a92] Running
	I0708 21:01:38.411005   59655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071971" [7b726b49-c243-4126-b6d2-fc12abc9a042] Running
	I0708 21:01:38.411009   59655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071971" [6a731125-daa4-4da1-b9e0-1206da592fde] Running
	I0708 21:01:38.411013   59655 system_pods.go:89] "kube-proxy-l2mdd" [b1d70ae2-ed86-49bd-8910-a12c5cd8091a] Running
	I0708 21:01:38.411017   59655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071971" [dc238033-038e-49ec-ba48-392b0ec2f7bd] Running
	I0708 21:01:38.411024   59655 system_pods.go:89] "metrics-server-569cc877fc-k8vhl" [09f957f3-d76f-4f21-b9a6-e5b249d07e1e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:38.411029   59655 system_pods.go:89] "storage-provisioner" [805a8fdb-ed9e-4f80-a2c9-7d8a0155b228] Running
	I0708 21:01:38.411040   59655 system_pods.go:126] duration metric: took 205.695019ms to wait for k8s-apps to be running ...
	I0708 21:01:38.411050   59655 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:38.411092   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:38.428218   59655 system_svc.go:56] duration metric: took 17.158999ms WaitForService to wait for kubelet
	I0708 21:01:38.428248   59655 kubeadm.go:576] duration metric: took 4.737530934s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:38.428270   59655 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:38.606369   59655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:38.606394   59655 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:38.606404   59655 node_conditions.go:105] duration metric: took 178.130401ms to run NodePressure ...
	I0708 21:01:38.606415   59655 start.go:240] waiting for startup goroutines ...
	I0708 21:01:38.606423   59655 start.go:245] waiting for cluster config update ...
	I0708 21:01:38.606432   59655 start.go:254] writing updated cluster config ...
	I0708 21:01:38.606686   59655 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:38.657280   59655 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:38.659556   59655 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-071971" cluster and "default" namespace by default
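	
	A minimal Go sketch of the apiserver healthz probe logged above (api_server.go checks https://192.168.72.163:8444/healthz and expects "200 ok"). The real probe authenticates with the cluster's certificates; the insecure TLS config below is an assumption made only to keep the sketch self-contained:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		// For illustration only: skip certificate verification instead of
		// loading the cluster CA the way minikube's own check does.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
	
		// Endpoint taken from the log above (apiserver on port 8444).
		resp, err := client.Get("https://192.168.72.163:8444/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
	
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}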
	
	
	==> CRI-O <==
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.128289082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473040128266952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d9ebd48-700e-4a45-89a3-caa403e3ca52 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.128882508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d156ccd-3a88-4504-b474-941b46f391be name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.128951726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d156ccd-3a88-4504-b474-941b46f391be name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.129180562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2e3069d015518dcd5e4c0967245dd74359ccdd3a693e5b4e26b330a139e95ab9,PodSandboxId:6afaa0f9dfe4869e8cc4dd4b3b075fdeb333c5e34088f77329936236ede1710a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472495932351496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8msvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38c1e0eb-5eb4-4acb-a5ae-c72871884e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 67ba72c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4084256f6479f4d4d67c4cf0c6e045ed54a7e9d883968077655fa6a188e7e5a,PodSandboxId:424bb8d1df2945e4c7a6543ecea7af6889b52de644565ac54774a8466116fa83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472495480249719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 805a8fdb-ed9e-4f80-a2c9-7d8a0155b228,},Annotations:map[string]string{io.kubernetes.container.hash: 881740b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de092804020ec874ad903cba82425d744cade6acadd234fae7472c54a580e7b,PodSandboxId:0d491e8ede82b38f0c69cd28c624735670d471e8454bbba7ed0ebb55519e9f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472494400846679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hq7zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddb0f99d-a91d-4bb7-96e7-695b6101a601,},Annotations:map[string]string{io.kubernetes.container.hash: 5c1d43b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e3b0cd648694b3f58bf5d849690114c88e9bbf8bb427f3f7a291c723ea4ac,PodSandboxId:d5eb5df2c91fca807a98e2633a3323bc0632af36985b1a5ea834a384058c1ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1720472494099867717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2mdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d70ae2-ed86-49bd-8910-a12c5cd8091a,},Annotations:map[string]string{io.kubernetes.container.hash: 4395f9e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6924e8ced977682d418cea0d436ce49cf79ee382272cb973c8dce7ef6eed6b5,PodSandboxId:a70bc3eb6f4c04162a76fcf65ff5dce7b7a4359f108796f57dd38de4f85e5e9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172047247376840774
4,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd817aef551a1a373ed796646422588,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f55d96f0b61615e83effa00dfff2f7f1cb7042fa84dd01741ec99c489c1cb0b,PodSandboxId:9000c90118635dcdea0100dab133192632f107ee54d7a238d153e5b98fc2fcdb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472473767173365,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df570c9bdb1120e2db1c21b23efdd45,},Annotations:map[string]string{io.kubernetes.container.hash: 1a36d12c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3647b50ce1b4e99d8a409635d93fb22ffbdad34501c3dcbf031498e75ffbab,PodSandboxId:32e80034e1af0e67a39de4df58fe89b2e58887fa59c554adb1298f70c9c2673f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472473712140892,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cecc367fdaa42e3448bb0470688d7b39,},Annotations:map[string]string{io.kubernetes.container.hash: 451cdd04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16947ba6fb46a98e68f1a9f8639e8ceb7d4ce698bbbdc562e43dfbfb921bc130,PodSandboxId:53fa2bbde8261450cb7eb5ad812de328c035611520be6db541d4abc3822737ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472473716706863,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 443e4b2ad13f1980b427a0563ef15fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d156ccd-3a88-4504-b474-941b46f391be name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.168837835Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=04923ae1-4e69-4c08-a88f-ea3bfb015f79 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.168936782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=04923ae1-4e69-4c08-a88f-ea3bfb015f79 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.170757321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fd66a79-51b6-4b08-82d2-6126b60a71c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.171284885Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473040171244262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fd66a79-51b6-4b08-82d2-6126b60a71c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.172322385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37e46c48-e91e-44d1-a1d5-8d6d1d9f40e7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.172428989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37e46c48-e91e-44d1-a1d5-8d6d1d9f40e7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.172728213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2e3069d015518dcd5e4c0967245dd74359ccdd3a693e5b4e26b330a139e95ab9,PodSandboxId:6afaa0f9dfe4869e8cc4dd4b3b075fdeb333c5e34088f77329936236ede1710a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472495932351496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8msvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38c1e0eb-5eb4-4acb-a5ae-c72871884e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 67ba72c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4084256f6479f4d4d67c4cf0c6e045ed54a7e9d883968077655fa6a188e7e5a,PodSandboxId:424bb8d1df2945e4c7a6543ecea7af6889b52de644565ac54774a8466116fa83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472495480249719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 805a8fdb-ed9e-4f80-a2c9-7d8a0155b228,},Annotations:map[string]string{io.kubernetes.container.hash: 881740b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de092804020ec874ad903cba82425d744cade6acadd234fae7472c54a580e7b,PodSandboxId:0d491e8ede82b38f0c69cd28c624735670d471e8454bbba7ed0ebb55519e9f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472494400846679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hq7zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddb0f99d-a91d-4bb7-96e7-695b6101a601,},Annotations:map[string]string{io.kubernetes.container.hash: 5c1d43b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e3b0cd648694b3f58bf5d849690114c88e9bbf8bb427f3f7a291c723ea4ac,PodSandboxId:d5eb5df2c91fca807a98e2633a3323bc0632af36985b1a5ea834a384058c1ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1720472494099867717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2mdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d70ae2-ed86-49bd-8910-a12c5cd8091a,},Annotations:map[string]string{io.kubernetes.container.hash: 4395f9e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6924e8ced977682d418cea0d436ce49cf79ee382272cb973c8dce7ef6eed6b5,PodSandboxId:a70bc3eb6f4c04162a76fcf65ff5dce7b7a4359f108796f57dd38de4f85e5e9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172047247376840774
4,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd817aef551a1a373ed796646422588,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f55d96f0b61615e83effa00dfff2f7f1cb7042fa84dd01741ec99c489c1cb0b,PodSandboxId:9000c90118635dcdea0100dab133192632f107ee54d7a238d153e5b98fc2fcdb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472473767173365,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df570c9bdb1120e2db1c21b23efdd45,},Annotations:map[string]string{io.kubernetes.container.hash: 1a36d12c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3647b50ce1b4e99d8a409635d93fb22ffbdad34501c3dcbf031498e75ffbab,PodSandboxId:32e80034e1af0e67a39de4df58fe89b2e58887fa59c554adb1298f70c9c2673f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472473712140892,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cecc367fdaa42e3448bb0470688d7b39,},Annotations:map[string]string{io.kubernetes.container.hash: 451cdd04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16947ba6fb46a98e68f1a9f8639e8ceb7d4ce698bbbdc562e43dfbfb921bc130,PodSandboxId:53fa2bbde8261450cb7eb5ad812de328c035611520be6db541d4abc3822737ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472473716706863,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 443e4b2ad13f1980b427a0563ef15fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37e46c48-e91e-44d1-a1d5-8d6d1d9f40e7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.214192332Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac766927-54e6-4dea-b787-34ca54f055e2 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.214265982Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac766927-54e6-4dea-b787-34ca54f055e2 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.215874380Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=309b7b39-c25b-4605-a0fb-5cb55f79f839 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.216378566Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473040216340826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=309b7b39-c25b-4605-a0fb-5cb55f79f839 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.217050269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52f17ae4-17d9-4b50-9000-c159605438ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.217117502Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52f17ae4-17d9-4b50-9000-c159605438ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.217340897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2e3069d015518dcd5e4c0967245dd74359ccdd3a693e5b4e26b330a139e95ab9,PodSandboxId:6afaa0f9dfe4869e8cc4dd4b3b075fdeb333c5e34088f77329936236ede1710a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472495932351496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8msvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38c1e0eb-5eb4-4acb-a5ae-c72871884e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 67ba72c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4084256f6479f4d4d67c4cf0c6e045ed54a7e9d883968077655fa6a188e7e5a,PodSandboxId:424bb8d1df2945e4c7a6543ecea7af6889b52de644565ac54774a8466116fa83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472495480249719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 805a8fdb-ed9e-4f80-a2c9-7d8a0155b228,},Annotations:map[string]string{io.kubernetes.container.hash: 881740b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de092804020ec874ad903cba82425d744cade6acadd234fae7472c54a580e7b,PodSandboxId:0d491e8ede82b38f0c69cd28c624735670d471e8454bbba7ed0ebb55519e9f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472494400846679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hq7zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddb0f99d-a91d-4bb7-96e7-695b6101a601,},Annotations:map[string]string{io.kubernetes.container.hash: 5c1d43b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e3b0cd648694b3f58bf5d849690114c88e9bbf8bb427f3f7a291c723ea4ac,PodSandboxId:d5eb5df2c91fca807a98e2633a3323bc0632af36985b1a5ea834a384058c1ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1720472494099867717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2mdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d70ae2-ed86-49bd-8910-a12c5cd8091a,},Annotations:map[string]string{io.kubernetes.container.hash: 4395f9e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6924e8ced977682d418cea0d436ce49cf79ee382272cb973c8dce7ef6eed6b5,PodSandboxId:a70bc3eb6f4c04162a76fcf65ff5dce7b7a4359f108796f57dd38de4f85e5e9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172047247376840774
4,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd817aef551a1a373ed796646422588,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f55d96f0b61615e83effa00dfff2f7f1cb7042fa84dd01741ec99c489c1cb0b,PodSandboxId:9000c90118635dcdea0100dab133192632f107ee54d7a238d153e5b98fc2fcdb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472473767173365,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df570c9bdb1120e2db1c21b23efdd45,},Annotations:map[string]string{io.kubernetes.container.hash: 1a36d12c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3647b50ce1b4e99d8a409635d93fb22ffbdad34501c3dcbf031498e75ffbab,PodSandboxId:32e80034e1af0e67a39de4df58fe89b2e58887fa59c554adb1298f70c9c2673f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472473712140892,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cecc367fdaa42e3448bb0470688d7b39,},Annotations:map[string]string{io.kubernetes.container.hash: 451cdd04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16947ba6fb46a98e68f1a9f8639e8ceb7d4ce698bbbdc562e43dfbfb921bc130,PodSandboxId:53fa2bbde8261450cb7eb5ad812de328c035611520be6db541d4abc3822737ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472473716706863,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 443e4b2ad13f1980b427a0563ef15fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52f17ae4-17d9-4b50-9000-c159605438ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.258023132Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05786621-99e8-45a7-9fe4-6560ee0ef252 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.258119012Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05786621-99e8-45a7-9fe4-6560ee0ef252 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.259174698Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04a8417e-f710-4daa-9973-085283f6185e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.259676738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473040259550897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04a8417e-f710-4daa-9973-085283f6185e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.260135743Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c59e3696-3861-4ad1-a17a-c34bf29c9556 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.260190936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c59e3696-3861-4ad1-a17a-c34bf29c9556 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:10:40 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:10:40.260397593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2e3069d015518dcd5e4c0967245dd74359ccdd3a693e5b4e26b330a139e95ab9,PodSandboxId:6afaa0f9dfe4869e8cc4dd4b3b075fdeb333c5e34088f77329936236ede1710a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472495932351496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8msvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38c1e0eb-5eb4-4acb-a5ae-c72871884e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 67ba72c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4084256f6479f4d4d67c4cf0c6e045ed54a7e9d883968077655fa6a188e7e5a,PodSandboxId:424bb8d1df2945e4c7a6543ecea7af6889b52de644565ac54774a8466116fa83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472495480249719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 805a8fdb-ed9e-4f80-a2c9-7d8a0155b228,},Annotations:map[string]string{io.kubernetes.container.hash: 881740b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de092804020ec874ad903cba82425d744cade6acadd234fae7472c54a580e7b,PodSandboxId:0d491e8ede82b38f0c69cd28c624735670d471e8454bbba7ed0ebb55519e9f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472494400846679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hq7zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddb0f99d-a91d-4bb7-96e7-695b6101a601,},Annotations:map[string]string{io.kubernetes.container.hash: 5c1d43b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e3b0cd648694b3f58bf5d849690114c88e9bbf8bb427f3f7a291c723ea4ac,PodSandboxId:d5eb5df2c91fca807a98e2633a3323bc0632af36985b1a5ea834a384058c1ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1720472494099867717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2mdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d70ae2-ed86-49bd-8910-a12c5cd8091a,},Annotations:map[string]string{io.kubernetes.container.hash: 4395f9e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6924e8ced977682d418cea0d436ce49cf79ee382272cb973c8dce7ef6eed6b5,PodSandboxId:a70bc3eb6f4c04162a76fcf65ff5dce7b7a4359f108796f57dd38de4f85e5e9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172047247376840774
4,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd817aef551a1a373ed796646422588,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f55d96f0b61615e83effa00dfff2f7f1cb7042fa84dd01741ec99c489c1cb0b,PodSandboxId:9000c90118635dcdea0100dab133192632f107ee54d7a238d153e5b98fc2fcdb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472473767173365,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df570c9bdb1120e2db1c21b23efdd45,},Annotations:map[string]string{io.kubernetes.container.hash: 1a36d12c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3647b50ce1b4e99d8a409635d93fb22ffbdad34501c3dcbf031498e75ffbab,PodSandboxId:32e80034e1af0e67a39de4df58fe89b2e58887fa59c554adb1298f70c9c2673f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472473712140892,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cecc367fdaa42e3448bb0470688d7b39,},Annotations:map[string]string{io.kubernetes.container.hash: 451cdd04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16947ba6fb46a98e68f1a9f8639e8ceb7d4ce698bbbdc562e43dfbfb921bc130,PodSandboxId:53fa2bbde8261450cb7eb5ad812de328c035611520be6db541d4abc3822737ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472473716706863,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 443e4b2ad13f1980b427a0563ef15fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c59e3696-3861-4ad1-a17a-c34bf29c9556 name=/runtime.v1.RuntimeService/ListContainers
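	
	The Version, ImageFsInfo and ListContainers entries above are CRI RPCs that CRI-O serves on its unix socket (unix:///var/run/crio/crio.sock, per the node's cri-socket annotation); crictl drives the same endpoint. A hedged Go sketch of the ListContainers call, assuming the k8s.io/cri-api client and google.golang.org/grpc are available (illustrative only, not the kubelet's or minikube's own code):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial the CRI-O socket advertised in the node annotations above.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Same call the log records as /runtime.v1.RuntimeService/ListContainers,
		// issued with an empty filter ("No filters were applied").
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-25s  attempt=%d  %s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}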
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2e3069d015518       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   6afaa0f9dfe48       coredns-7db6d8ff4d-8msvk
	e4084256f6479       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   424bb8d1df294       storage-provisioner
	3de092804020e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   0d491e8ede82b       coredns-7db6d8ff4d-hq7zj
	3e4e3b0cd6486       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   9 minutes ago       Running             kube-proxy                0                   d5eb5df2c91fc       kube-proxy-l2mdd
	b6924e8ced977       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   9 minutes ago       Running             kube-scheduler            2                   a70bc3eb6f4c0       kube-scheduler-default-k8s-diff-port-071971
	0f55d96f0b616       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   9000c90118635       etcd-default-k8s-diff-port-071971
	16947ba6fb46a       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   9 minutes ago       Running             kube-controller-manager   2                   53fa2bbde8261       kube-controller-manager-default-k8s-diff-port-071971
	3e3647b50ce1b       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   9 minutes ago       Running             kube-apiserver            2                   32e80034e1af0       kube-apiserver-default-k8s-diff-port-071971
	
	
	==> coredns [2e3069d015518dcd5e4c0967245dd74359ccdd3a693e5b4e26b330a139e95ab9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [3de092804020ec874ad903cba82425d744cade6acadd234fae7472c54a580e7b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-071971
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-071971
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=default-k8s-diff-port-071971
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T21_01_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 21:01:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-071971
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 21:10:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 21:06:47 +0000   Mon, 08 Jul 2024 21:01:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 21:06:47 +0000   Mon, 08 Jul 2024 21:01:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 21:06:47 +0000   Mon, 08 Jul 2024 21:01:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 21:06:47 +0000   Mon, 08 Jul 2024 21:01:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.163
	  Hostname:    default-k8s-diff-port-071971
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9971f0cfcb78465ebb3b469ae22caf80
	  System UUID:                9971f0cf-cb78-465e-bb3b-469ae22caf80
	  Boot ID:                    d6b9f9cb-247a-44ef-8525-631937b2bb57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8msvk                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-hq7zj                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-071971                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-071971             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-071971    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-l2mdd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-071971             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-k8vhl                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node default-k8s-diff-port-071971 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node default-k8s-diff-port-071971 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node default-k8s-diff-port-071971 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-071971 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-071971 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-071971 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s                   node-controller  Node default-k8s-diff-port-071971 event: Registered Node default-k8s-diff-port-071971 in Controller
	
	
	==> dmesg <==
	[  +0.051149] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041333] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul 8 20:56] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.340585] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.378992] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.689125] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.135370] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.186742] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.159466] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.338648] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +4.629238] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.069894] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.476423] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +5.573033] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.324864] kauditd_printk_skb: 50 callbacks suppressed
	[  +7.027136] kauditd_printk_skb: 27 callbacks suppressed
	[Jul 8 21:01] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.686524] systemd-fstab-generator[3577]: Ignoring "noauto" option for root device
	[  +4.750982] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.332042] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	[ +14.375769] systemd-fstab-generator[4117]: Ignoring "noauto" option for root device
	[  +0.008630] kauditd_printk_skb: 14 callbacks suppressed
	[Jul 8 21:02] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [0f55d96f0b61615e83effa00dfff2f7f1cb7042fa84dd01741ec99c489c1cb0b] <==
	{"level":"info","ts":"2024-07-08T21:01:14.183035Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T21:01:14.183244Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3dd8974a0ddcfcd8","initial-advertise-peer-urls":["https://192.168.72.163:2380"],"listen-peer-urls":["https://192.168.72.163:2380"],"advertise-client-urls":["https://192.168.72.163:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.163:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T21:01:14.183289Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T21:01:14.183409Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.163:2380"}
	{"level":"info","ts":"2024-07-08T21:01:14.18344Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.163:2380"}
	{"level":"info","ts":"2024-07-08T21:01:14.186262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dd8974a0ddcfcd8 switched to configuration voters=(4456478175599066328)"}
	{"level":"info","ts":"2024-07-08T21:01:14.186503Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31866a174e81d2aa","local-member-id":"3dd8974a0ddcfcd8","added-peer-id":"3dd8974a0ddcfcd8","added-peer-peer-urls":["https://192.168.72.163:2380"]}
	{"level":"info","ts":"2024-07-08T21:01:15.145727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dd8974a0ddcfcd8 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-08T21:01:15.145763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dd8974a0ddcfcd8 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-08T21:01:15.145783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dd8974a0ddcfcd8 received MsgPreVoteResp from 3dd8974a0ddcfcd8 at term 1"}
	{"level":"info","ts":"2024-07-08T21:01:15.145798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dd8974a0ddcfcd8 became candidate at term 2"}
	{"level":"info","ts":"2024-07-08T21:01:15.145806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dd8974a0ddcfcd8 received MsgVoteResp from 3dd8974a0ddcfcd8 at term 2"}
	{"level":"info","ts":"2024-07-08T21:01:15.145818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dd8974a0ddcfcd8 became leader at term 2"}
	{"level":"info","ts":"2024-07-08T21:01:15.145828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dd8974a0ddcfcd8 elected leader 3dd8974a0ddcfcd8 at term 2"}
	{"level":"info","ts":"2024-07-08T21:01:15.147728Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3dd8974a0ddcfcd8","local-member-attributes":"{Name:default-k8s-diff-port-071971 ClientURLs:[https://192.168.72.163:2379]}","request-path":"/0/members/3dd8974a0ddcfcd8/attributes","cluster-id":"31866a174e81d2aa","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T21:01:15.147744Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:01:15.147913Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T21:01:15.148374Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T21:01:15.148633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T21:01:15.148672Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T21:01:15.149365Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31866a174e81d2aa","local-member-id":"3dd8974a0ddcfcd8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:01:15.149434Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:01:15.149451Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:01:15.150543Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.163:2379"}
	{"level":"info","ts":"2024-07-08T21:01:15.151347Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:10:40 up 14 min,  0 users,  load average: 0.56, 0.35, 0.21
	Linux default-k8s-diff-port-071971 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3e3647b50ce1b4e99d8a409635d93fb22ffbdad34501c3dcbf031498e75ffbab] <==
	I0708 21:04:36.012224       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:06:16.931531       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:06:16.931720       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0708 21:06:17.931924       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:06:17.932021       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:06:17.932034       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:06:17.932089       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:06:17.932134       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:06:17.933403       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:07:17.933182       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:07:17.933303       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:07:17.933317       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:07:17.934549       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:07:17.934638       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:07:17.934647       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:09:17.933819       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:09:17.934210       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:09:17.934241       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:09:17.934923       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:09:17.934969       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:09:17.936111       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
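
The 503s above are the kube-apiserver's aggregation layer repeatedly failing to reach the v1beta1.metrics.k8s.io APIService registered by the metrics-server addon; until a backing pod answers, the OpenAPI controller keeps re-queuing the item. A minimal check from outside the test (not part of the original run; it reuses the kubectl context shown elsewhere in this log and assumes the addon's usual k8s-app=metrics-server label):
	kubectl --context default-k8s-diff-port-071971 get apiservice v1beta1.metrics.k8s.io
	kubectl --context default-k8s-diff-port-071971 -n kube-system get pods -l k8s-app=metrics-server
The first command reports the APIService's Available condition; the second shows whether the backing metrics-server pod is actually running.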
	
	
	==> kube-controller-manager [16947ba6fb46a98e68f1a9f8639e8ceb7d4ce698bbbdc562e43dfbfb921bc130] <==
	I0708 21:05:04.013432       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:05:33.546729       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:05:34.023123       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:06:03.551273       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:06:04.033465       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:06:33.556770       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:06:34.043695       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:07:03.562455       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:07:04.052502       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0708 21:07:23.709412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="390.101µs"
	E0708 21:07:33.567307       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:07:34.064777       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0708 21:07:38.704662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="191.12µs"
	E0708 21:08:03.572433       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:08:04.074729       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:08:33.577474       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:08:34.082432       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:09:03.582412       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:09:04.092221       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:09:33.589096       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:09:34.100123       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:10:03.595117       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:10:04.108344       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:10:33.600970       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:10:34.118484       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3e4e3b0cd648694b3f58bf5d849690114c88e9bbf8bb427f3f7a291c723ea4ac] <==
	I0708 21:01:34.617748       1 server_linux.go:69] "Using iptables proxy"
	I0708 21:01:34.643158       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.163"]
	I0708 21:01:34.804167       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 21:01:34.804214       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 21:01:34.804231       1 server_linux.go:165] "Using iptables Proxier"
	I0708 21:01:34.813606       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 21:01:34.814275       1 server.go:872] "Version info" version="v1.30.2"
	I0708 21:01:34.814308       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 21:01:34.815750       1 config.go:192] "Starting service config controller"
	I0708 21:01:34.817663       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 21:01:34.817770       1 config.go:101] "Starting endpoint slice config controller"
	I0708 21:01:34.817777       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 21:01:34.818662       1 config.go:319] "Starting node config controller"
	I0708 21:01:34.818688       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 21:01:34.918202       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 21:01:34.927456       1 shared_informer.go:320] Caches are synced for service config
	I0708 21:01:34.927596       1 shared_informer.go:320] Caches are synced for node config
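
kube-proxy came up in iptables mode, IPv4 only; the "No iptables support for family IPv6" line matches the ip6tables nat-table errors in the kubelet log further down. To confirm the rules it programmed, one could list its service chain from inside the VM (a sketch, not part of the test run):
	out/minikube-linux-amd64 -p default-k8s-diff-port-071971 ssh "sudo iptables -t nat -S KUBE-SERVICES | head -n 20"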
	
	
	==> kube-scheduler [b6924e8ced977682d418cea0d436ce49cf79ee382272cb973c8dce7ef6eed6b5] <==
	W0708 21:01:16.946878       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 21:01:16.946907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 21:01:16.946998       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 21:01:16.947029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 21:01:16.947155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 21:01:16.947309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 21:01:16.947340       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 21:01:16.947483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 21:01:16.947252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 21:01:16.947536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 21:01:17.859413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 21:01:17.859645       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 21:01:18.026179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 21:01:18.026236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 21:01:18.138934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 21:01:18.138984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 21:01:18.192136       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 21:01:18.192205       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 21:01:18.208198       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 21:01:18.208266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 21:01:18.212926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 21:01:18.212985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 21:01:18.250513       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 21:01:18.250622       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0708 21:01:21.138971       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
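
The "forbidden" warnings above are the scheduler's informers listing resources before the API server has finished publishing the bootstrap RBAC for system:kube-scheduler; they stop once the caches sync at 21:01:21. The relevant binding and the scheduler's leader-election lease can be inspected afterwards (a sketch, not part of the test run, assuming the default bootstrap names):
	kubectl --context default-k8s-diff-port-071971 get clusterrolebinding system:kube-scheduler
	kubectl --context default-k8s-diff-port-071971 -n kube-system get lease kube-scheduler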
	
	
	==> kubelet <==
	Jul 08 21:08:19 default-k8s-diff-port-071971 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:08:19 default-k8s-diff-port-071971 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:08:19 default-k8s-diff-port-071971 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:08:19 default-k8s-diff-port-071971 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:08:34 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:08:34.690114    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:08:48 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:08:48.690093    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:08:59 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:08:59.690537    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:09:11 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:09:11.690433    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:09:19 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:09:19.746020    3902 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:09:19 default-k8s-diff-port-071971 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:09:19 default-k8s-diff-port-071971 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:09:19 default-k8s-diff-port-071971 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:09:19 default-k8s-diff-port-071971 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:09:22 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:09:22.689539    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:09:33 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:09:33.689822    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:09:48 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:09:48.689957    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:09:59 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:09:59.692191    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:10:14 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:10:14.690091    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:10:19 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:10:19.744398    3902 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:10:19 default-k8s-diff-port-071971 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:10:19 default-k8s-diff-port-071971 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:10:19 default-k8s-diff-port-071971 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:10:19 default-k8s-diff-port-071971 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:10:28 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:10:28.690140    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:10:40 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:10:40.690910    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
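
Every kubelet error above is the same pod: metrics-server-569cc877fc-k8vhl stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, which suggests the test deliberately points the metrics-server addon at an unreachable registry so the pod never becomes ready. To see the configured image and the pull failures directly (hypothetical commands, not part of the run; the deployment name metrics-server is inferred from the pod name):
	kubectl --context default-k8s-diff-port-071971 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	kubectl --context default-k8s-diff-port-071971 -n kube-system get events --field-selector involvedObject.name=metrics-server-569cc877fc-k8vhl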
	
	
	==> storage-provisioner [e4084256f6479f4d4d67c4cf0c6e045ed54a7e9d883968077655fa6a188e7e5a] <==
	I0708 21:01:35.591791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 21:01:35.623911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 21:01:35.624014       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 21:01:35.644823       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 21:01:35.645028       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-071971_e56a7e45-5712-4549-80e8-7683024bf04c!
	I0708 21:01:35.652315       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db7d01ea-b577-4a29-80ee-0b856bf5f5f1", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-071971_e56a7e45-5712-4549-80e8-7683024bf04c became leader
	I0708 21:01:35.745265       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-071971_e56a7e45-5712-4549-80e8-7683024bf04c!
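
The storage-provisioner takes leadership through the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event above; the current holder is recorded in that object's control-plane.alpha.kubernetes.io/leader annotation. To inspect it (a sketch, not part of the test output):
	kubectl --context default-k8s-diff-port-071971 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml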
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-071971 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-k8vhl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-071971 describe pod metrics-server-569cc877fc-k8vhl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-071971 describe pod metrics-server-569cc877fc-k8vhl: exit status 1 (62.714636ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-k8vhl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-071971 describe pod metrics-server-569cc877fc-k8vhl: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.98s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (340.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0708 21:06:29.733811   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0708 21:09:23.843664   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0708 21:11:29.733367   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914355 -n old-k8s-version-914355
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 2 (248.902414ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-914355" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-914355 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-914355 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.556µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-914355 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
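The connection-refused warning above is emitted on each poll attempt: the addon check repeatedly lists pods in the kubernetes-dashboard namespace by label selector until the 9m0s deadline, and every attempt fails while nothing answers on 192.168.50.65:8443. The snippet below is a minimal sketch of that kind of label-selector poll using client-go; it is an illustration under assumptions (the kubeconfig path, poll interval, and error handling are placeholders), not the actual minikube test helper.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the real test resolves it from the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(9 * time.Minute) // matches the 9m0s wait seen in the log
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// While the apiserver is down this returns "connect: connection refused",
			// which is what the WARNING line above records on every attempt.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
			time.Sleep(5 * time.Second) // placeholder poll interval
			continue
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase)
		}
		return
	}
	fmt.Println("pod \"k8s-app=kubernetes-dashboard\" failed to start within 9m0s")
}

With the apiserver stopped, every List call in this sketch fails the same way until the deadline expires, which is why the test then falls through to the status and post-mortem steps below.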
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 2 (235.157129ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-914355 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-914355 logs -n 25: (1.027572269s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-897827                                        | pause-897827                 | jenkins | v1.33.1 | 08 Jul 24 20:46 UTC | 08 Jul 24 20:46 UTC |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:46 UTC | 08 Jul 24 20:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| ssh     | cert-options-059722 ssh                                | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-059722 -- sudo                         | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-059722                                 | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-028021             | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-914355             | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-239931            | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-733920 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-733920                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:50 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028021                  | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071971  | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-239931                 | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071971       | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC | 08 Jul 24 21:01 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
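	
	For reference, the final start attempt recorded in the table above corresponds to an invocation of roughly the following shape. This is reconstructed from the table rows only; the binary path matches the MINIKUBE_BIN value shown in the log below, and the exact flag ordering of the original run may have differed.
	
	  out/minikube-linux-amd64 start -p default-k8s-diff-port-071971 \
	    --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 \
	    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.2
	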
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 20:53:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 20:53:37.291760   59655 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:53:37.291847   59655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:53:37.291851   59655 out.go:304] Setting ErrFile to fd 2...
	I0708 20:53:37.291855   59655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:53:37.292047   59655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:53:37.292558   59655 out.go:298] Setting JSON to false
	I0708 20:53:37.293434   59655 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5766,"bootTime":1720466251,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:53:37.293485   59655 start.go:139] virtualization: kvm guest
	I0708 20:53:37.296412   59655 out.go:177] * [default-k8s-diff-port-071971] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:53:37.297727   59655 notify.go:220] Checking for updates...
	I0708 20:53:37.297756   59655 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:53:37.299168   59655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:53:37.300541   59655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:53:37.301818   59655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:53:37.303117   59655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:53:37.304266   59655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:53:37.305793   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:53:37.306182   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:53:37.306236   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:53:37.321719   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I0708 20:53:37.322090   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:53:37.322593   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:53:37.322617   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:53:37.322908   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:53:37.323093   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:53:37.323329   59655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:53:37.323638   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:53:37.323679   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:53:37.338244   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0708 20:53:37.338660   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:53:37.339118   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:53:37.339144   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:53:37.339463   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:53:37.339735   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:53:37.374356   59655 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 20:53:37.375714   59655 start.go:297] selected driver: kvm2
	I0708 20:53:37.375729   59655 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:53:37.375866   59655 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:53:37.376843   59655 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:53:37.376918   59655 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 20:53:37.391219   59655 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 20:53:37.391602   59655 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 20:53:37.391659   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:53:37.391672   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:53:37.391707   59655 start.go:340] cluster config:
	{Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:53:37.391797   59655 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 20:53:37.393453   59655 out.go:177] * Starting "default-k8s-diff-port-071971" primary control-plane node in "default-k8s-diff-port-071971" cluster
	I0708 20:53:37.923695   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:40.995762   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:37.394736   59655 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:53:37.394768   59655 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 20:53:37.394777   59655 cache.go:56] Caching tarball of preloaded images
	I0708 20:53:37.394849   59655 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 20:53:37.394860   59655 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 20:53:37.394962   59655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:53:37.395154   59655 start.go:360] acquireMachinesLock for default-k8s-diff-port-071971: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:53:47.075721   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:50.147727   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:56.227766   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:53:59.299738   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:05.379699   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:08.451749   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:14.531759   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:17.603688   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:23.683730   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:26.755738   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:32.835706   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:35.907702   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:41.987722   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:45.059873   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:51.139726   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:54:54.211797   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:00.291730   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:03.363720   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:09.443741   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:12.515718   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:19.358315   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:55:19.358408   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:55:19.359948   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:55:19.360000   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:55:19.360076   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:55:19.360217   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:55:19.360354   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:55:19.360443   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:55:19.362594   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:55:19.362671   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:55:19.362761   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:55:19.362915   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:55:19.362997   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:55:19.363087   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:55:19.363181   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:55:19.363271   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:55:19.363360   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:55:19.363470   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:55:19.363582   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:55:19.363636   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:55:19.363711   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:55:19.363781   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:55:19.363852   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:55:19.363941   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:55:19.364010   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:55:19.364135   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:55:19.364226   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:55:19.364276   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:55:19.364342   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:55:18.595786   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:19.366132   57466 out.go:204]   - Booting up control plane ...
	I0708 20:55:19.366219   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:55:19.366301   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:55:19.366364   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:55:19.366433   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:55:19.366579   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:55:19.366629   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:55:19.366692   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.366846   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.366909   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367070   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367133   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367285   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367344   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367511   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367575   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:55:19.367735   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:55:19.367743   57466 kubeadm.go:309] 
	I0708 20:55:19.367783   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:55:19.367817   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:55:19.367823   57466 kubeadm.go:309] 
	I0708 20:55:19.367851   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:55:19.367888   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:55:19.367991   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:55:19.368009   57466 kubeadm.go:309] 
	I0708 20:55:19.368127   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:55:19.368164   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:55:19.368192   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:55:19.368198   57466 kubeadm.go:309] 
	I0708 20:55:19.368284   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:55:19.368355   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:55:19.368362   57466 kubeadm.go:309] 
	I0708 20:55:19.368455   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:55:19.368539   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:55:19.368606   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:55:19.368666   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:55:19.368673   57466 kubeadm.go:309] 
	W0708 20:55:19.368784   57466 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0708 20:55:19.368831   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 20:55:19.838778   57466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:55:19.853958   57466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:55:19.863986   57466 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:55:19.864010   57466 kubeadm.go:156] found existing configuration files:
	
	I0708 20:55:19.864055   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:55:19.873085   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:55:19.873147   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:55:19.882654   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:55:19.891579   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:55:19.891634   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:55:19.901397   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.910901   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:55:19.910976   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:55:19.920599   57466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:55:19.929826   57466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:55:19.929891   57466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:55:19.939284   57466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 20:55:20.153136   57466 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 20:55:21.667700   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:27.747756   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:30.819712   58678 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.108:22: connect: no route to host
	I0708 20:55:33.824320   59107 start.go:364] duration metric: took 3m48.54985296s to acquireMachinesLock for "embed-certs-239931"
	I0708 20:55:33.824375   59107 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:55:33.824390   59107 fix.go:54] fixHost starting: 
	I0708 20:55:33.824700   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:55:33.824728   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:55:33.839554   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0708 20:55:33.839987   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:55:33.840472   59107 main.go:141] libmachine: Using API Version  1
	I0708 20:55:33.840495   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:55:33.840844   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:55:33.841030   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:33.841194   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 20:55:33.842597   59107 fix.go:112] recreateIfNeeded on embed-certs-239931: state=Stopped err=<nil>
	I0708 20:55:33.842627   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	W0708 20:55:33.842787   59107 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:55:33.844574   59107 out.go:177] * Restarting existing kvm2 VM for "embed-certs-239931" ...
	I0708 20:55:33.845674   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Start
	I0708 20:55:33.845858   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring networks are active...
	I0708 20:55:33.846607   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring network default is active
	I0708 20:55:33.846907   59107 main.go:141] libmachine: (embed-certs-239931) Ensuring network mk-embed-certs-239931 is active
	I0708 20:55:33.847329   59107 main.go:141] libmachine: (embed-certs-239931) Getting domain xml...
	I0708 20:55:33.848120   59107 main.go:141] libmachine: (embed-certs-239931) Creating domain...
	I0708 20:55:35.057523   59107 main.go:141] libmachine: (embed-certs-239931) Waiting to get IP...
	I0708 20:55:35.058300   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.058841   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.058870   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.058773   60083 retry.go:31] will retry after 280.969113ms: waiting for machine to come up
	I0708 20:55:33.821580   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:55:33.821617   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:55:33.821932   58678 buildroot.go:166] provisioning hostname "no-preload-028021"
	I0708 20:55:33.821957   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:55:33.822166   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:55:33.824193   58678 machine.go:97] duration metric: took 4m37.421469682s to provisionDockerMachine
	I0708 20:55:33.824234   58678 fix.go:56] duration metric: took 4m37.444794791s for fixHost
	I0708 20:55:33.824241   58678 start.go:83] releasing machines lock for "no-preload-028021", held for 4m37.44481517s
	W0708 20:55:33.824262   58678 start.go:713] error starting host: provision: host is not running
	W0708 20:55:33.824343   58678 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0708 20:55:33.824352   58678 start.go:728] Will try again in 5 seconds ...
	I0708 20:55:35.341327   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.341861   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.341882   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.341837   60083 retry.go:31] will retry after 333.972717ms: waiting for machine to come up
	I0708 20:55:35.677531   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:35.678035   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:35.678066   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:35.677984   60083 retry.go:31] will retry after 387.46652ms: waiting for machine to come up
	I0708 20:55:36.066618   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:36.067079   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:36.067106   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:36.067044   60083 retry.go:31] will retry after 523.369614ms: waiting for machine to come up
	I0708 20:55:36.591863   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:36.592337   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:36.592363   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:36.592295   60083 retry.go:31] will retry after 670.675561ms: waiting for machine to come up
	I0708 20:55:37.264084   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:37.264521   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:37.264565   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:37.264485   60083 retry.go:31] will retry after 775.348922ms: waiting for machine to come up
	I0708 20:55:38.041398   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:38.041860   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:38.041885   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:38.041801   60083 retry.go:31] will retry after 1.135585711s: waiting for machine to come up
	I0708 20:55:39.179405   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:39.179910   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:39.179938   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:39.179867   60083 retry.go:31] will retry after 1.422689354s: waiting for machine to come up
	I0708 20:55:38.826037   58678 start.go:360] acquireMachinesLock for no-preload-028021: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 20:55:40.603811   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:40.604240   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:40.604261   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:40.604199   60083 retry.go:31] will retry after 1.640612147s: waiting for machine to come up
	I0708 20:55:42.247230   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:42.247797   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:42.247837   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:42.247733   60083 retry.go:31] will retry after 2.031069729s: waiting for machine to come up
	I0708 20:55:44.280878   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:44.281419   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:44.281451   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:44.281355   60083 retry.go:31] will retry after 2.394813785s: waiting for machine to come up
	I0708 20:55:46.678897   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:46.679398   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:46.679430   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:46.679357   60083 retry.go:31] will retry after 2.419242459s: waiting for machine to come up
	I0708 20:55:49.100362   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:49.100901   59107 main.go:141] libmachine: (embed-certs-239931) DBG | unable to find current IP address of domain embed-certs-239931 in network mk-embed-certs-239931
	I0708 20:55:49.100964   59107 main.go:141] libmachine: (embed-certs-239931) DBG | I0708 20:55:49.100858   60083 retry.go:31] will retry after 4.241202363s: waiting for machine to come up
	I0708 20:55:54.868873   59655 start.go:364] duration metric: took 2m17.473689428s to acquireMachinesLock for "default-k8s-diff-port-071971"
	I0708 20:55:54.868970   59655 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:55:54.868991   59655 fix.go:54] fixHost starting: 
	I0708 20:55:54.869400   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:55:54.869432   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:55:54.888853   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44159
	I0708 20:55:54.889234   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:55:54.889674   59655 main.go:141] libmachine: Using API Version  1
	I0708 20:55:54.889698   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:55:54.890009   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:55:54.890196   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:55:54.890332   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 20:55:54.891932   59655 fix.go:112] recreateIfNeeded on default-k8s-diff-port-071971: state=Stopped err=<nil>
	I0708 20:55:54.891972   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	W0708 20:55:54.892120   59655 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:55:54.894439   59655 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-071971" ...
	I0708 20:55:53.347154   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.347587   59107 main.go:141] libmachine: (embed-certs-239931) Found IP for machine: 192.168.61.126
	I0708 20:55:53.347601   59107 main.go:141] libmachine: (embed-certs-239931) Reserving static IP address...
	I0708 20:55:53.347612   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has current primary IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.348084   59107 main.go:141] libmachine: (embed-certs-239931) Reserved static IP address: 192.168.61.126
	I0708 20:55:53.348106   59107 main.go:141] libmachine: (embed-certs-239931) Waiting for SSH to be available...
	I0708 20:55:53.348119   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "embed-certs-239931", mac: "52:54:00:b3:d9:ac", ip: "192.168.61.126"} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.348136   59107 main.go:141] libmachine: (embed-certs-239931) DBG | skip adding static IP to network mk-embed-certs-239931 - found existing host DHCP lease matching {name: "embed-certs-239931", mac: "52:54:00:b3:d9:ac", ip: "192.168.61.126"}
	I0708 20:55:53.348148   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Getting to WaitForSSH function...
	I0708 20:55:53.350167   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.350545   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.350583   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.350651   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Using SSH client type: external
	I0708 20:55:53.350675   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa (-rw-------)
	I0708 20:55:53.350704   59107 main.go:141] libmachine: (embed-certs-239931) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:55:53.350722   59107 main.go:141] libmachine: (embed-certs-239931) DBG | About to run SSH command:
	I0708 20:55:53.350736   59107 main.go:141] libmachine: (embed-certs-239931) DBG | exit 0
	I0708 20:55:53.479934   59107 main.go:141] libmachine: (embed-certs-239931) DBG | SSH cmd err, output: <nil>: 
	I0708 20:55:53.480309   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetConfigRaw
	I0708 20:55:53.480891   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:53.483079   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.483399   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.483424   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.483740   59107 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/config.json ...
	I0708 20:55:53.483920   59107 machine.go:94] provisionDockerMachine start ...
	I0708 20:55:53.483937   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:53.484172   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.486461   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.486772   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.486793   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.486921   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.487075   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.487253   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.487385   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.487556   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.487774   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.487786   59107 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:55:53.600047   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:55:53.600078   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.600308   59107 buildroot.go:166] provisioning hostname "embed-certs-239931"
	I0708 20:55:53.600342   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.600508   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.603107   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.603498   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.603529   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.603728   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.603954   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.604122   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.604345   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.604512   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.604716   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.604737   59107 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-239931 && echo "embed-certs-239931" | sudo tee /etc/hostname
	I0708 20:55:53.734414   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-239931
	
	I0708 20:55:53.734457   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.737117   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.737473   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.737501   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.737640   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:53.737852   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.738020   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:53.738184   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:53.738355   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:53.738536   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:53.738558   59107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-239931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-239931/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-239931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:55:53.860753   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:55:53.860781   59107 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:55:53.860799   59107 buildroot.go:174] setting up certificates
	I0708 20:55:53.860808   59107 provision.go:84] configureAuth start
	I0708 20:55:53.860816   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetMachineName
	I0708 20:55:53.861070   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:53.863652   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.863999   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.864018   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.864221   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:53.866207   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.866480   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:53.866504   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:53.866613   59107 provision.go:143] copyHostCerts
	I0708 20:55:53.866671   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:55:53.866680   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:55:53.866741   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:55:53.866837   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:55:53.866845   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:55:53.866868   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:55:53.866932   59107 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:55:53.866939   59107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:55:53.866959   59107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:55:53.867017   59107 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.embed-certs-239931 san=[127.0.0.1 192.168.61.126 embed-certs-239931 localhost minikube]
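The provision step above signs a per-machine server certificate against the shared minikube CA, embedding the logged IP and DNS SANs so the machine's TLS endpoint is reachable under any of those names. A minimal Go sketch of that signing step follows; it assumes PEM files named ca.pem/ca-key.pem with a PKCS#1 RSA CA key and is only an illustration, not minikube's provision code.

// signsrv.go: sketch of "generating server cert" with the SANs from the log above.
// File names and the PKCS#1 RSA CA key format are assumptions.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEM(path string) *pem.Block {
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes) // CaCertPath in the log
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes) // CaPrivateKeyPath
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-239931"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.126")},
		DNSNames:    []string{"embed-certs-239931", "localhost", "minikube"},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey),
	}), 0600)
}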
	I0708 20:55:54.171765   59107 provision.go:177] copyRemoteCerts
	I0708 20:55:54.171835   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:55:54.171859   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.174341   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.174621   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.174650   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.174762   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.174957   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.175129   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.175280   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.262051   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:55:54.287118   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0708 20:55:54.310071   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:55:54.337811   59107 provision.go:87] duration metric: took 476.990356ms to configureAuth
	I0708 20:55:54.337851   59107 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:55:54.338077   59107 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:55:54.338147   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.340972   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.341259   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.341296   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.341423   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.341720   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.341870   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.342006   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.342147   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:54.342332   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:54.342350   59107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:55:54.618752   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:55:54.618775   59107 machine.go:97] duration metric: took 1.134844127s to provisionDockerMachine
	I0708 20:55:54.618786   59107 start.go:293] postStartSetup for "embed-certs-239931" (driver="kvm2")
	I0708 20:55:54.618795   59107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:55:54.618823   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.619220   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:55:54.619249   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.621857   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.622144   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.622168   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.622348   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.622532   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.622703   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.622853   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.710096   59107 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:55:54.714437   59107 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:55:54.714458   59107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:55:54.714524   59107 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:55:54.714597   59107 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:55:54.714679   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:55:54.724350   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:55:54.748078   59107 start.go:296] duration metric: took 129.264358ms for postStartSetup
	I0708 20:55:54.748120   59107 fix.go:56] duration metric: took 20.923736253s for fixHost
	I0708 20:55:54.748138   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.750818   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.751200   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.751227   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.751377   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.751611   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.751763   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.751879   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.752034   59107 main.go:141] libmachine: Using SSH client type: native
	I0708 20:55:54.752240   59107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0708 20:55:54.752256   59107 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 20:55:54.868663   59107 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472154.844724958
	
	I0708 20:55:54.868694   59107 fix.go:216] guest clock: 1720472154.844724958
	I0708 20:55:54.868706   59107 fix.go:229] Guest: 2024-07-08 20:55:54.844724958 +0000 UTC Remote: 2024-07-08 20:55:54.748123056 +0000 UTC m=+249.617599643 (delta=96.601902ms)
	I0708 20:55:54.868764   59107 fix.go:200] guest clock delta is within tolerance: 96.601902ms
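The clock check above runs date +%s.%N in the guest, parses the seconds.nanoseconds value, and compares it against the host clock; the start only continues without a resync because the delta (96.6ms here) stays inside the tolerance. A small Go sketch of that comparison, with a hypothetical one-second tolerance rather than minikube's actual constant:

// clockdelta.go: sketch of the guest-vs-host clock comparison seen above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseDateSN turns `date +%s.%N` output (e.g. "1720472154.844724958") into a time.Time.
func parseDateSN(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseDateSN("1720472154.844724958") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Now().Sub(guest)
	const tolerance = time.Second // hypothetical tolerance, not minikube's constant
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync the guest clock\n", delta)
	}
}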
	I0708 20:55:54.868776   59107 start.go:83] releasing machines lock for "embed-certs-239931", held for 21.044425411s
	I0708 20:55:54.868811   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.869092   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:54.871867   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.872252   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.872295   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.872451   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.872921   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.873060   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 20:55:54.873151   59107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:55:54.873196   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.873271   59107 ssh_runner.go:195] Run: cat /version.json
	I0708 20:55:54.873297   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 20:55:54.876118   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876142   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876614   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.876641   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876682   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:54.876699   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:54.876851   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.876903   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 20:55:54.877017   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.877020   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 20:55:54.877193   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.877266   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 20:55:54.877349   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.877424   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 20:55:54.984516   59107 ssh_runner.go:195] Run: systemctl --version
	I0708 20:55:54.990926   59107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:55:55.142010   59107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:55:55.148138   59107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:55:55.148203   59107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:55:55.164086   59107 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:55:55.164111   59107 start.go:494] detecting cgroup driver to use...
	I0708 20:55:55.164204   59107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:55:55.184836   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:55:55.204002   59107 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:55:55.204079   59107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:55:55.218405   59107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:55:55.233462   59107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:55:55.357278   59107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:55:55.521141   59107 docker.go:233] disabling docker service ...
	I0708 20:55:55.521218   59107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:55:55.538949   59107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:55:55.558613   59107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:55:55.696926   59107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:55:55.819810   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:55:55.837012   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:55:55.856417   59107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:55:55.856497   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.868488   59107 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:55:55.868556   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.879503   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.891183   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.901872   59107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:55:55.914498   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.925676   59107 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:55:55.944340   59107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
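The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup driver to cgroupfs, force conmon into the pod cgroup, and allow unprivileged low ports through default_sysctls. The Go sketch below applies the same rewrites to an in-memory config; the starting content is illustrative, and minikube itself shells out to sed over SSH as logged rather than doing this in-process.

// crioconf.go: regexp equivalent of the sed edits to 02-crio.conf shown above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.5"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image and switch the cgroup driver, as the first two sed calls do.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	// Make sure unprivileged ports are allowed via default_sysctls.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	fmt.Print(conf)
}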
	I0708 20:55:55.955961   59107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:55:55.965785   59107 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:55:55.965845   59107 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:55:55.979853   59107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:55:55.989331   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:55:56.108476   59107 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:55:56.262396   59107 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:55:56.262463   59107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
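After restarting CRI-O, the start path waits up to 60s for /var/run/crio/crio.sock to appear before moving on to the crictl version checks. A minimal Go sketch of that wait, with a hypothetical 500ms poll interval:

// waitsock.go: sketch of "Will wait 60s for socket path /var/run/crio/crio.sock".
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present; crictl checks can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}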
	I0708 20:55:56.267682   59107 start.go:562] Will wait 60s for crictl version
	I0708 20:55:56.267740   59107 ssh_runner.go:195] Run: which crictl
	I0708 20:55:56.273115   59107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:55:56.323276   59107 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:55:56.323364   59107 ssh_runner.go:195] Run: crio --version
	I0708 20:55:56.352650   59107 ssh_runner.go:195] Run: crio --version
	I0708 20:55:56.394502   59107 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 20:55:54.895951   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Start
	I0708 20:55:54.896150   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring networks are active...
	I0708 20:55:54.896971   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring network default is active
	I0708 20:55:54.897344   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Ensuring network mk-default-k8s-diff-port-071971 is active
	I0708 20:55:54.897672   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Getting domain xml...
	I0708 20:55:54.898340   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Creating domain...
	I0708 20:55:56.182187   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting to get IP...
	I0708 20:55:56.183209   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.183699   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.183759   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.183663   60221 retry.go:31] will retry after 255.382138ms: waiting for machine to come up
	I0708 20:55:56.441290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.441760   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.441789   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.441718   60221 retry.go:31] will retry after 363.116234ms: waiting for machine to come up
	I0708 20:55:56.806418   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.806871   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:56.806899   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:56.806819   60221 retry.go:31] will retry after 392.319836ms: waiting for machine to come up
	I0708 20:55:57.200645   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.201144   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.201176   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:57.201095   60221 retry.go:31] will retry after 528.490844ms: waiting for machine to come up
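The retry.go lines above poll for the domain's IP (via its DHCP lease in the mk-default-k8s-diff-port-071971 network), sleeping for a growing, jittered interval between attempts until an address appears or the start times out. A generic Go sketch of that wait loop; lookupIP, the backoff schedule, and the 4-minute deadline are hypothetical stand-ins, not minikube's retry.go:

// waitip.go: generic sketch of the "waiting for machine to come up" retry loop.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the hypervisor's DHCP leases for the domain's MAC.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	backoff := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Jittered, growing delay, similar in spirit to the retry lines above.
		delay := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, delay)
		time.Sleep(delay)
		backoff = backoff * 3 / 2
	}
	fmt.Println("timed out waiting for machine to come up")
}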
	I0708 20:55:56.395778   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetIP
	I0708 20:55:56.398458   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:56.398826   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 20:55:56.398853   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 20:55:56.399088   59107 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0708 20:55:56.403789   59107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:55:56.418081   59107 kubeadm.go:877] updating cluster {Name:embed-certs-239931 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:55:56.418244   59107 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:55:56.418312   59107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:55:56.459969   59107 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:55:56.460034   59107 ssh_runner.go:195] Run: which lz4
	I0708 20:55:56.464561   59107 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 20:55:56.469087   59107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:55:56.469130   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 20:55:58.010716   59107 crio.go:462] duration metric: took 1.546186223s to copy over tarball
	I0708 20:55:58.010782   59107 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:55:57.731640   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.732172   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:57.732223   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:57.732129   60221 retry.go:31] will retry after 554.611559ms: waiting for machine to come up
	I0708 20:55:58.287924   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.288512   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.288557   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:58.288491   60221 retry.go:31] will retry after 642.466107ms: waiting for machine to come up
	I0708 20:55:58.932485   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.933002   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:58.933032   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:58.932958   60221 retry.go:31] will retry after 999.83146ms: waiting for machine to come up
	I0708 20:55:59.934050   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:55:59.934618   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:55:59.934664   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:55:59.934571   60221 retry.go:31] will retry after 1.069868254s: waiting for machine to come up
	I0708 20:56:01.006049   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:01.006563   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:01.006594   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:01.006506   60221 retry.go:31] will retry after 1.182777891s: waiting for machine to come up
	I0708 20:56:02.191001   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:02.191460   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:02.191483   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:02.191418   60221 retry.go:31] will retry after 1.559742627s: waiting for machine to come up
	I0708 20:56:00.267199   59107 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256392679s)
	I0708 20:56:00.267230   59107 crio.go:469] duration metric: took 2.256489175s to extract the tarball
	I0708 20:56:00.267240   59107 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:56:00.305692   59107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:00.346669   59107 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:56:00.346694   59107 cache_images.go:84] Images are preloaded, skipping loading
	I0708 20:56:00.346703   59107 kubeadm.go:928] updating node { 192.168.61.126 8443 v1.30.2 crio true true} ...
	I0708 20:56:00.346804   59107 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-239931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:00.346868   59107 ssh_runner.go:195] Run: crio config
	I0708 20:56:00.392577   59107 cni.go:84] Creating CNI manager for ""
	I0708 20:56:00.392597   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:00.392608   59107 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:00.392637   59107 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-239931 NodeName:embed-certs-239931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:00.392814   59107 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-239931"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:00.392894   59107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:00.403593   59107 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:00.403675   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:00.413449   59107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0708 20:56:00.430407   59107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:00.447599   59107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0708 20:56:00.465525   59107 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:00.469912   59107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:00.483255   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:00.623802   59107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:00.642946   59107 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931 for IP: 192.168.61.126
	I0708 20:56:00.642967   59107 certs.go:194] generating shared ca certs ...
	I0708 20:56:00.642982   59107 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:00.643143   59107 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:00.643184   59107 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:00.643193   59107 certs.go:256] generating profile certs ...
	I0708 20:56:00.643270   59107 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/client.key
	I0708 20:56:00.643317   59107 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.key.7743ab88
	I0708 20:56:00.643354   59107 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.key
	I0708 20:56:00.643487   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:00.643524   59107 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:00.643533   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:00.643556   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:00.643579   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:00.643604   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:00.643670   59107 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:00.644353   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:00.699260   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:00.752536   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:00.783946   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:00.812524   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0708 20:56:00.843035   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:56:00.872061   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:00.898805   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/embed-certs-239931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 20:56:00.925402   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:00.952114   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:00.984067   59107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:01.010037   59107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:01.027599   59107 ssh_runner.go:195] Run: openssl version
	I0708 20:56:01.033942   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:01.046273   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.051807   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.051887   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:01.058482   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:01.070774   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:01.083215   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.088389   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.088460   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:01.094594   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:01.107360   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:01.119973   59107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.125011   59107 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.125085   59107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:01.131596   59107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
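The block above copies each CA certificate into /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. 3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL locates CAs in a hashed directory. A Go sketch of one such install, using an illustrative path:

// certhash.go: sketch of installing a CA cert with its OpenSSL subject-hash symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCert(certPath string) error {
	// `openssl x509 -hash -noout` prints the subject-name hash OpenSSL uses
	// to look certificates up in a hashed directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// Equivalent of the logged `test -L ... || ln -fs ...`.
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/131412.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}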
	I0708 20:56:01.143993   59107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:01.149299   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:01.156201   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:01.162939   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:01.169874   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:01.176264   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:01.182905   59107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
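The openssl x509 -checkend 86400 probes above exit 0 when a certificate will still be valid 24 hours from now and 1 when it is about to expire, which is how the restart path decides the existing control-plane certs can be reused. A short Go sketch of that probe, with an illustrative path:

// checkend.go: sketch of the `openssl x509 -checkend 86400` expiry probe used above.
package main

import (
	"fmt"
	"os/exec"
)

func expiresWithin(certPath string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", fmt.Sprint(seconds))
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // non-zero exit: certificate expires within the window
		}
		return false, err
	}
	return false, nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}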
	I0708 20:56:01.189961   59107 kubeadm.go:391] StartCluster: {Name:embed-certs-239931 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-239931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:01.190041   59107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:01.190085   59107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:01.238097   59107 cri.go:89] found id: ""
	I0708 20:56:01.238167   59107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:01.250478   59107 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:01.250503   59107 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:01.250509   59107 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:01.250562   59107 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:01.261664   59107 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:01.262667   59107 kubeconfig.go:125] found "embed-certs-239931" server: "https://192.168.61.126:8443"
	I0708 20:56:01.264788   59107 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:01.275846   59107 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.126
	I0708 20:56:01.275888   59107 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:01.275908   59107 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:01.276006   59107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:01.318646   59107 cri.go:89] found id: ""
	I0708 20:56:01.318745   59107 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:01.340273   59107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:01.353325   59107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:01.353360   59107 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:01.353412   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:56:01.363659   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:01.363732   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:01.374340   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:56:01.384284   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:01.384352   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:01.394981   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:56:01.405532   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:01.405604   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:01.416741   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:56:01.427724   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:01.427812   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:56:01.439352   59107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:01.451286   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:01.581829   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.013995   59107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.432133224s)
	I0708 20:56:03.014024   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.229195   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:03.305328   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
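
The kubeadm invocations above replay the init phases one at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml. A minimal sketch of driving that same phase sequence with Go's os/exec is shown below; the binary path and config path are taken from the log and are assumptions about the node, not minikube's own implementation.

    // phases.go: run a fixed sequence of kubeadm init phases, stopping at the first failure.
    // Hypothetical sketch; paths mirror the log above and may differ on a real node.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.30.2/kubeadm" // assumption based on the PATH in the log
    	config := "/var/tmp/minikube/kubeadm.yaml"

    	phases := [][]string{
    		{"init", "phase", "certs", "all", "--config", config},
    		{"init", "phase", "kubeconfig", "all", "--config", config},
    		{"init", "phase", "kubelet-start", "--config", config},
    		{"init", "phase", "control-plane", "all", "--config", config},
    		{"init", "phase", "etcd", "local", "--config", config},
    	}

    	for _, args := range phases {
    		out, err := exec.Command(kubeadm, args...).CombinedOutput()
    		if err != nil {
    			log.Fatalf("kubeadm %v failed: %v\n%s", args, err, out)
    		}
    		fmt.Printf("kubeadm %v: ok\n", args)
    	}
    }
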
	I0708 20:56:03.415409   59107 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:03.415508   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:03.916187   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:04.416389   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:04.489450   59107 api_server.go:72] duration metric: took 1.074041899s to wait for apiserver process to appear ...
	I0708 20:56:04.489482   59107 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:04.489516   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:04.490133   59107 api_server.go:269] stopped: https://192.168.61.126:8443/healthz: Get "https://192.168.61.126:8443/healthz": dial tcp 192.168.61.126:8443: connect: connection refused
	I0708 20:56:04.989698   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:03.753446   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:03.753998   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:03.754026   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:03.753954   60221 retry.go:31] will retry after 1.922949894s: waiting for machine to come up
	I0708 20:56:05.679244   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:05.679831   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:05.679860   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:05.679794   60221 retry.go:31] will retry after 3.531627288s: waiting for machine to come up
	I0708 20:56:07.673375   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:56:07.673404   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:56:07.673420   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:07.776516   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:07.776551   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:07.989668   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:07.996865   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:07.996897   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:08.490538   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:08.496342   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:08.496374   59107 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:08.990583   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 20:56:09.001043   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0708 20:56:09.011126   59107 api_server.go:141] control plane version: v1.30.2
	I0708 20:56:09.011160   59107 api_server.go:131] duration metric: took 4.521668725s to wait for apiserver health ...
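
The healthz wait above polls https://192.168.61.126:8443/healthz roughly every 500ms and treats connection refused, 403 and 500 responses as "not ready yet" until the endpoint finally answers 200/ok. A self-contained sketch of that polling pattern, standard library only and with TLS verification skipped purely for illustration, could look like:

    // healthz_poll.go: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
    // Illustrative only; the URL is the one from the log and certificate checks are skipped for brevity.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // endpoint answered "ok"
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		} else {
    			fmt.Printf("healthz not reachable yet: %v\n", err)
    		}
    		time.Sleep(500 * time.Millisecond) // same cadence as the checks in the log
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.126:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
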
	I0708 20:56:09.011171   59107 cni.go:84] Creating CNI manager for ""
	I0708 20:56:09.011179   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:09.012842   59107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:56:09.014197   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:56:09.041325   59107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 20:56:09.073110   59107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:56:09.086225   59107 system_pods.go:59] 8 kube-system pods found
	I0708 20:56:09.086265   59107 system_pods.go:61] "coredns-7db6d8ff4d-wnqsl" [868e66bf-9f86-465f-aad1-d11a6d218ee6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:56:09.086276   59107 system_pods.go:61] "etcd-embed-certs-239931" [48815314-6e48-4fe0-b7b1-4a1d2f6610d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:56:09.086286   59107 system_pods.go:61] "kube-apiserver-embed-certs-239931" [665311f4-d633-4b93-ae8c-2b43b45fff68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:56:09.086294   59107 system_pods.go:61] "kube-controller-manager-embed-certs-239931" [4ab6d657-8c74-491c-b965-ac68f2bd323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:56:09.086309   59107 system_pods.go:61] "kube-proxy-5h5xl" [9b169148-aa75-40a2-b08b-1d579ee179b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 20:56:09.086316   59107 system_pods.go:61] "kube-scheduler-embed-certs-239931" [012399d8-10a4-407d-a899-3c840dd52ca8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:56:09.086331   59107 system_pods.go:61] "metrics-server-569cc877fc-h4btg" [c78cfc3c-159f-4a06-b4a0-63f8bd0a6703] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:56:09.086339   59107 system_pods.go:61] "storage-provisioner" [2ca0ea1d-5d1c-4e18-a871-e035a8946b3c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 20:56:09.086348   59107 system_pods.go:74] duration metric: took 13.216051ms to wait for pod list to return data ...
	I0708 20:56:09.086363   59107 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:56:09.089689   59107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:56:09.089719   59107 node_conditions.go:123] node cpu capacity is 2
	I0708 20:56:09.089732   59107 node_conditions.go:105] duration metric: took 3.363611ms to run NodePressure ...
	I0708 20:56:09.089751   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:09.377271   59107 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:56:09.383148   59107 kubeadm.go:733] kubelet initialised
	I0708 20:56:09.383174   59107 kubeadm.go:734] duration metric: took 5.876526ms waiting for restarted kubelet to initialise ...
	I0708 20:56:09.383183   59107 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:56:09.388903   59107 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace to be "Ready" ...
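
From here the test waits up to 4m0s for each system-critical pod to report Ready, starting with coredns-7db6d8ff4d-wnqsl. The log does this through the API; an equivalent check can be reproduced by shelling out to kubectl wait, as in this hypothetical helper (it assumes kubectl and a kubeconfig for the embed-certs-239931 cluster are available):

    // podready.go: shell out to kubectl to reproduce the "wait for pod Ready" step seen above.
    // Hypothetical helper, not minikube code.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func waitPodReady(namespace, pod, timeout string) error {
    	cmd := exec.Command("kubectl", "wait",
    		"--namespace", namespace,
    		"--for=condition=Ready",
    		"pod/"+pod,
    		"--timeout="+timeout)
    	out, err := cmd.CombinedOutput()
    	log.Printf("%s", out)
    	return err
    }

    func main() {
    	// Same pod and budget as the log: coredns in kube-system, up to 4 minutes.
    	if err := waitPodReady("kube-system", "coredns-7db6d8ff4d-wnqsl", "4m"); err != nil {
    		log.Fatalf("pod never became Ready: %v", err)
    	}
    }
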
	I0708 20:56:09.214856   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:09.215410   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | unable to find current IP address of domain default-k8s-diff-port-071971 in network mk-default-k8s-diff-port-071971
	I0708 20:56:09.215441   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | I0708 20:56:09.215355   60221 retry.go:31] will retry after 3.64169465s: waiting for machine to come up
	I0708 20:56:14.180834   58678 start.go:364] duration metric: took 35.354748041s to acquireMachinesLock for "no-preload-028021"
	I0708 20:56:14.180893   58678 start.go:96] Skipping create...Using existing machine configuration
	I0708 20:56:14.180905   58678 fix.go:54] fixHost starting: 
	I0708 20:56:14.181259   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:56:14.181299   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:56:14.197712   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I0708 20:56:14.198157   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:56:14.198615   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:56:14.198637   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:56:14.198996   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:56:14.199173   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:14.199342   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:56:14.200905   58678 fix.go:112] recreateIfNeeded on no-preload-028021: state=Stopped err=<nil>
	I0708 20:56:14.200930   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	W0708 20:56:14.201103   58678 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 20:56:14.203062   58678 out.go:177] * Restarting existing kvm2 VM for "no-preload-028021" ...
	I0708 20:56:11.396453   59107 pod_ready.go:102] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:13.396672   59107 pod_ready.go:102] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:12.860535   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.860988   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Found IP for machine: 192.168.72.163
	I0708 20:56:12.861010   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Reserving static IP address...
	I0708 20:56:12.861027   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has current primary IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.861445   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-071971", mac: "52:54:00:40:a7:be", ip: "192.168.72.163"} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.861473   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Reserved static IP address: 192.168.72.163
	I0708 20:56:12.861494   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | skip adding static IP to network mk-default-k8s-diff-port-071971 - found existing host DHCP lease matching {name: "default-k8s-diff-port-071971", mac: "52:54:00:40:a7:be", ip: "192.168.72.163"}
	I0708 20:56:12.861515   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Getting to WaitForSSH function...
	I0708 20:56:12.861531   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Waiting for SSH to be available...
	I0708 20:56:12.864099   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.864436   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.864465   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.864631   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Using SSH client type: external
	I0708 20:56:12.864663   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa (-rw-------)
	I0708 20:56:12.864693   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:56:12.864708   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | About to run SSH command:
	I0708 20:56:12.864721   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | exit 0
	I0708 20:56:12.996077   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | SSH cmd err, output: <nil>: 
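
WaitForSSH above keeps issuing `exit 0` over SSH until the command succeeds. A reduced version of the same idea, probing only whether the SSH port accepts TCP connections (address taken from the log, no authentication attempted), is sketched here:

    // wait_ssh.go: retry a TCP dial to the SSH port until it succeeds or the budget runs out.
    // Simplified stand-in for the WaitForSSH step in the log.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func waitForPort(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		fmt.Printf("ssh not ready on %s yet: %v\n", addr, err)
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
    	if err := waitForPort("192.168.72.163:22", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
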
	I0708 20:56:12.996459   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetConfigRaw
	I0708 20:56:12.997091   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:12.999431   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:12.999815   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:12.999844   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.000145   59655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/config.json ...
	I0708 20:56:13.000354   59655 machine.go:94] provisionDockerMachine start ...
	I0708 20:56:13.000377   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:13.000558   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.002898   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.003255   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.003290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.003444   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.003626   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.003778   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.003930   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.004094   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.004297   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.004311   59655 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:56:13.119929   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 20:56:13.119956   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.120203   59655 buildroot.go:166] provisioning hostname "default-k8s-diff-port-071971"
	I0708 20:56:13.120320   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.120544   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.123750   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.124225   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.124256   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.124438   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.124647   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.124818   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.124993   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.125155   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.125339   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.125360   59655 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-071971 && echo "default-k8s-diff-port-071971" | sudo tee /etc/hostname
	I0708 20:56:13.256165   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-071971
	
	I0708 20:56:13.256199   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.258991   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.259342   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.259376   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.259596   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.259828   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.260011   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.260149   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.260325   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.260506   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.260530   59655 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-071971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-071971/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-071971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:56:13.381593   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
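
Each provisioning step above ("Using SSH client type: native" followed by the hostname and /etc/hosts commands) runs a single shell command on the guest over SSH with the machine's private key. A compact sketch of that pattern with golang.org/x/crypto/ssh follows; the address, user and key path are copied from the log, host-key checking is disabled to match the ssh options shown earlier, and error handling is trimmed:

    // ssh_run.go: run one shell command on the guest over SSH with a private key,
    // roughly what the provisioning lines above are doing. Illustrative sketch only.
    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func runOverSSH(addr, user, keyPath, command string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the log uses StrictHostKeyChecking=no
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(command)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("192.168.72.163:22", "docker",
    		"/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa",
    		"hostname")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(out)
    }
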
	I0708 20:56:13.381627   59655 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:56:13.381684   59655 buildroot.go:174] setting up certificates
	I0708 20:56:13.381700   59655 provision.go:84] configureAuth start
	I0708 20:56:13.381716   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetMachineName
	I0708 20:56:13.382023   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:13.385065   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.385358   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.385394   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.385566   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.387752   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.388092   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.388132   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.388290   59655 provision.go:143] copyHostCerts
	I0708 20:56:13.388350   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:56:13.388361   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:56:13.388408   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:56:13.388506   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:56:13.388516   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:56:13.388536   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:56:13.388587   59655 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:56:13.388593   59655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:56:13.388610   59655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:56:13.389123   59655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-071971 san=[127.0.0.1 192.168.72.163 default-k8s-diff-port-071971 localhost minikube]
	I0708 20:56:13.445451   59655 provision.go:177] copyRemoteCerts
	I0708 20:56:13.445509   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:56:13.445536   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.448926   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.449291   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.449320   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.449579   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.449785   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.449944   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.450097   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:13.542311   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0708 20:56:13.570585   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 20:56:13.597943   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:56:13.623837   59655 provision.go:87] duration metric: took 242.102893ms to configureAuth
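
configureAuth above regenerates server.pem with a SAN list covering 127.0.0.1, the machine IP, the machine name, localhost and minikube, then copies it to /etc/docker. The sketch below produces a comparable certificate with crypto/x509; it self-signs to stay short, whereas the real provisioning signs against the minikube CA key:

    // servercert.go: generate a self-signed TLS server certificate with the SANs seen in the log.
    // Sketch only; the actual step signs with the cluster CA rather than self-signing.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-071971"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the "san=[...]" list in the log.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.163")},
    		DNSNames:    []string{"default-k8s-diff-port-071971", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	out, _ := os.Create("server.pem")
    	defer out.Close()
    	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
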
	I0708 20:56:13.623874   59655 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:56:13.624077   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:56:13.624144   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.626802   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.627247   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.627277   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.627553   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.627738   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.627910   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.628047   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.628214   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:13.628414   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:13.628442   59655 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:56:13.930321   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:56:13.930349   59655 machine.go:97] duration metric: took 929.979999ms to provisionDockerMachine
	I0708 20:56:13.930361   59655 start.go:293] postStartSetup for "default-k8s-diff-port-071971" (driver="kvm2")
	I0708 20:56:13.930371   59655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:56:13.930385   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:13.930714   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:56:13.930747   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:13.933397   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.933704   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:13.933735   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:13.933927   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:13.934119   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:13.934266   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:13.934393   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.019603   59655 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:56:14.024556   59655 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:56:14.024589   59655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:56:14.024651   59655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:56:14.024744   59655 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:56:14.024836   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:56:14.035798   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:14.062351   59655 start.go:296] duration metric: took 131.974167ms for postStartSetup
	I0708 20:56:14.062402   59655 fix.go:56] duration metric: took 19.193418124s for fixHost
	I0708 20:56:14.062428   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.065264   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.065784   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.065822   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.066027   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.066271   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.066441   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.066716   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.066965   59655 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:14.067197   59655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0708 20:56:14.067210   59655 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 20:56:14.180654   59655 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472174.151879540
	
	I0708 20:56:14.180683   59655 fix.go:216] guest clock: 1720472174.151879540
	I0708 20:56:14.180695   59655 fix.go:229] Guest: 2024-07-08 20:56:14.15187954 +0000 UTC Remote: 2024-07-08 20:56:14.062408788 +0000 UTC m=+156.804206336 (delta=89.470752ms)
	I0708 20:56:14.180751   59655 fix.go:200] guest clock delta is within tolerance: 89.470752ms
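
The guest-clock check runs `date +%s.%N` on the VM, subtracts the host's timestamp and accepts the result when the delta stays inside a tolerance (89ms here). A tiny sketch of that comparison is below; the 1-second tolerance is an assumption for illustration, since the log does not print the actual threshold:

    // clockdelta.go: parse a guest "date +%s.%N" timestamp and check it against local time,
    // mirroring the guest-clock tolerance check in the fix.go lines above. Sketch only.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func parseGuestClock(stamp string) (time.Time, error) {
    	parts := strings.SplitN(stamp, ".", 2)
    	secs, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsecs int64
    	if len(parts) == 2 {
    		frac := (parts[1] + "000000000")[:9] // pad/truncate the fraction to nanoseconds
    		if nsecs, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(secs, nsecs), nil
    }

    func main() {
    	guest, err := parseGuestClock("1720472174.151879540") // value from the log
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	// 1 second is an assumed tolerance for illustration only.
    	fmt.Printf("delta=%s within=%v\n", delta, delta <= time.Second)
    }
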
	I0708 20:56:14.180757   59655 start.go:83] releasing machines lock for "default-k8s-diff-port-071971", held for 19.311816598s
	I0708 20:56:14.180802   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.181119   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:14.183833   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.184164   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.184194   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.184365   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.184862   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.185029   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 20:56:14.185105   59655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:56:14.185152   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.185222   59655 ssh_runner.go:195] Run: cat /version.json
	I0708 20:56:14.185248   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 20:56:14.187788   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188002   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188135   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.188167   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188290   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.188299   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:14.188328   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:14.188501   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 20:56:14.188505   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.188641   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.188715   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 20:56:14.188803   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.188885   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 20:56:14.189022   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 20:56:14.298253   59655 ssh_runner.go:195] Run: systemctl --version
	I0708 20:56:14.305004   59655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:56:14.457540   59655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:56:14.464497   59655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:56:14.464567   59655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:56:14.482063   59655 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:56:14.482093   59655 start.go:494] detecting cgroup driver to use...
	I0708 20:56:14.482172   59655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:56:14.500206   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:56:14.515905   59655 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:56:14.515952   59655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:56:14.532277   59655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:56:14.552772   59655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:56:14.686229   59655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:56:14.845428   59655 docker.go:233] disabling docker service ...
	I0708 20:56:14.845496   59655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:56:14.863157   59655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:56:14.881174   59655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:56:15.029269   59655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:56:15.165105   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:56:15.181619   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:56:15.202743   59655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:56:15.202806   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.215848   59655 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:56:15.215925   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.228697   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.240964   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.257002   59655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:56:15.270309   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.283215   59655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.303235   59655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:15.322364   59655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:56:15.340757   59655 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:56:15.340836   59655 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:56:15.360592   59655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:56:15.372486   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:15.510210   59655 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:56:15.656090   59655 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:56:15.656169   59655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:56:15.661847   59655 start.go:562] Will wait 60s for crictl version
	I0708 20:56:15.661917   59655 ssh_runner.go:195] Run: which crictl
	I0708 20:56:15.666004   59655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:56:15.707842   59655 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:56:15.707928   59655 ssh_runner.go:195] Run: crio --version
	I0708 20:56:15.740434   59655 ssh_runner.go:195] Run: crio --version
	I0708 20:56:15.772450   59655 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
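
A note for readers following the CRI-O setup above: the pause image, cgroup driver, and sysctl settings are applied with in-place sed edits over /etc/crio/crio.conf.d/02-crio.conf. A minimal standalone Go sketch of the same kind of rewrite is shown below. It is an illustration only, not minikube's code, and the local file path is an assumption for the example.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Assumed local copy of /etc/crio/crio.conf.d/02-crio.conf for the sketch.
	const confPath = "02-crio.conf"
	data, err := os.ReadFile(confPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	// Same effect as the logged sed command: replace any existing pause_image line.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	if err := os.WriteFile(confPath, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
	fmt.Println("pause_image updated")
}
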
	I0708 20:56:14.204596   58678 main.go:141] libmachine: (no-preload-028021) Calling .Start
	I0708 20:56:14.204780   58678 main.go:141] libmachine: (no-preload-028021) Ensuring networks are active...
	I0708 20:56:14.205463   58678 main.go:141] libmachine: (no-preload-028021) Ensuring network default is active
	I0708 20:56:14.205799   58678 main.go:141] libmachine: (no-preload-028021) Ensuring network mk-no-preload-028021 is active
	I0708 20:56:14.206280   58678 main.go:141] libmachine: (no-preload-028021) Getting domain xml...
	I0708 20:56:14.207187   58678 main.go:141] libmachine: (no-preload-028021) Creating domain...
	I0708 20:56:15.514100   58678 main.go:141] libmachine: (no-preload-028021) Waiting to get IP...
	I0708 20:56:15.514946   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:15.515419   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:15.515473   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:15.515397   60369 retry.go:31] will retry after 282.59763ms: waiting for machine to come up
	I0708 20:56:15.799976   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:15.800525   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:15.800555   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:15.800482   60369 retry.go:31] will retry after 377.094067ms: waiting for machine to come up
	I0708 20:56:16.179257   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:16.179953   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:16.179979   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:16.179861   60369 retry.go:31] will retry after 433.953923ms: waiting for machine to come up
	I0708 20:56:15.773711   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetIP
	I0708 20:56:15.776947   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:15.777368   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 20:56:15.777400   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 20:56:15.777704   59655 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0708 20:56:15.782466   59655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:15.796924   59655 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:56:15.797072   59655 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:56:15.797138   59655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:15.841838   59655 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:56:15.841922   59655 ssh_runner.go:195] Run: which lz4
	I0708 20:56:15.846443   59655 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 20:56:15.851267   59655 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 20:56:15.851302   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 20:56:15.397039   59107 pod_ready.go:92] pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:15.397070   59107 pod_ready.go:81] duration metric: took 6.008141421s for pod "coredns-7db6d8ff4d-wnqsl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:15.397082   59107 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.405606   59107 pod_ready.go:92] pod "etcd-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:17.405638   59107 pod_ready.go:81] duration metric: took 2.008547358s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.405653   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.411786   59107 pod_ready.go:92] pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:17.411810   59107 pod_ready.go:81] duration metric: took 6.14625ms for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:17.411822   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.421681   59107 pod_ready.go:92] pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.421712   59107 pod_ready.go:81] duration metric: took 2.009879259s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.421725   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5h5xl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.428235   59107 pod_ready.go:92] pod "kube-proxy-5h5xl" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.428260   59107 pod_ready.go:81] duration metric: took 6.527896ms for pod "kube-proxy-5h5xl" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.428269   59107 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.433130   59107 pod_ready.go:92] pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:19.433154   59107 pod_ready.go:81] duration metric: took 4.87807ms for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:19.433163   59107 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:16.615670   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:16.616225   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:16.616257   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:16.616177   60369 retry.go:31] will retry after 489.658115ms: waiting for machine to come up
	I0708 20:56:17.107848   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:17.108391   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:17.108420   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:17.108341   60369 retry.go:31] will retry after 620.239043ms: waiting for machine to come up
	I0708 20:56:17.730239   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:17.730822   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:17.730854   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:17.730758   60369 retry.go:31] will retry after 818.379867ms: waiting for machine to come up
	I0708 20:56:18.550539   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:18.551024   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:18.551049   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:18.550993   60369 retry.go:31] will retry after 1.138596155s: waiting for machine to come up
	I0708 20:56:19.691669   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:19.692214   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:19.692267   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:19.692149   60369 retry.go:31] will retry after 1.467771065s: waiting for machine to come up
	I0708 20:56:21.161367   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:21.161916   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:21.161945   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:21.161854   60369 retry.go:31] will retry after 1.592022559s: waiting for machine to come up
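
The "will retry after ..." lines above come from a retry helper that waits progressively longer for the VM to obtain an IP. A minimal sketch of that general pattern (randomized, growing delays around a placeholder condition; this is not minikube's retry.go) could look like:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, growing delay
// between failures, in the spirit of the "will retry after" log lines.
func retry(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s\n", jittered)
		time.Sleep(jittered)
		delay += delay / 2 // grow the wait between attempts
	}
	return errors.New("condition never became true")
}

func main() {
	start := time.Now()
	err := retry(10, 300*time.Millisecond, func() error {
		// Placeholder condition: pretend the machine gets an IP after ~3s.
		if time.Since(start) > 3*time.Second {
			return nil
		}
		return errors.New("no IP yet")
	})
	fmt.Println("result:", err)
}
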
	I0708 20:56:17.447251   59655 crio.go:462] duration metric: took 1.600850063s to copy over tarball
	I0708 20:56:17.447341   59655 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 20:56:19.773249   59655 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.325874804s)
	I0708 20:56:19.773277   59655 crio.go:469] duration metric: took 2.325993304s to extract the tarball
	I0708 20:56:19.773286   59655 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 20:56:19.811911   59655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:19.859029   59655 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 20:56:19.859060   59655 cache_images.go:84] Images are preloaded, skipping loading
	I0708 20:56:19.859070   59655 kubeadm.go:928] updating node { 192.168.72.163 8444 v1.30.2 crio true true} ...
	I0708 20:56:19.859208   59655 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-071971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:19.859281   59655 ssh_runner.go:195] Run: crio config
	I0708 20:56:19.905778   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:56:19.905806   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:19.905822   59655 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:19.905847   59655 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.163 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-071971 NodeName:default-k8s-diff-port-071971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:19.906035   59655 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.163
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-071971"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:19.906113   59655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:19.916307   59655 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:19.916388   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:19.926496   59655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0708 20:56:19.947778   59655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:19.969466   59655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0708 20:56:19.991103   59655 ssh_runner.go:195] Run: grep 192.168.72.163	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:19.995180   59655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:20.008005   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:20.143869   59655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:20.162694   59655 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971 for IP: 192.168.72.163
	I0708 20:56:20.162713   59655 certs.go:194] generating shared ca certs ...
	I0708 20:56:20.162745   59655 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:20.162930   59655 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:20.162986   59655 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:20.162997   59655 certs.go:256] generating profile certs ...
	I0708 20:56:20.163097   59655 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.key
	I0708 20:56:20.163220   59655 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.key.17bd30e8
	I0708 20:56:20.163259   59655 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.key
	I0708 20:56:20.163394   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:20.163478   59655 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:20.163493   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:20.163524   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:20.163558   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:20.163594   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:20.163659   59655 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:20.164318   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:20.198987   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:20.251872   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:20.281444   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:20.305751   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0708 20:56:20.332608   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 20:56:20.365206   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:20.399631   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:56:20.430016   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:20.462126   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:20.492669   59655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:20.521867   59655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:20.540725   59655 ssh_runner.go:195] Run: openssl version
	I0708 20:56:20.546789   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:20.558515   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.563342   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.563430   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:20.570039   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:20.585367   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:20.601217   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.605930   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.605993   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:20.612015   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:56:20.623796   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:20.635305   59655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.640571   59655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.640649   59655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:20.648600   59655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:20.663899   59655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:20.669383   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:20.675967   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:20.682513   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:20.690280   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:20.698720   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:20.705678   59655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
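
The openssl x509 -checkend 86400 invocations above ask whether each certificate remains valid for at least another 24 hours. A self-contained Go sketch of the same check (the certificate path is an assumption for the example) might be:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Assumed local certificate file; the logs check /var/lib/minikube/certs/*.crt on the VM.
	raw, err := os.ReadFile("apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if expiry is within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
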
	I0708 20:56:20.712524   59655 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-071971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:default-k8s-diff-port-071971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:20.712643   59655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:20.712700   59655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:20.761032   59655 cri.go:89] found id: ""
	I0708 20:56:20.761107   59655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:20.772712   59655 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:20.772736   59655 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:20.772742   59655 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:20.772793   59655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:20.784860   59655 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:20.785974   59655 kubeconfig.go:125] found "default-k8s-diff-port-071971" server: "https://192.168.72.163:8444"
	I0708 20:56:20.788290   59655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:20.799889   59655 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.163
	I0708 20:56:20.799919   59655 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:20.799947   59655 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:20.800011   59655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:20.846864   59655 cri.go:89] found id: ""
	I0708 20:56:20.846936   59655 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:20.865883   59655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:20.877476   59655 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:20.877495   59655 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:20.877548   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0708 20:56:20.889786   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:20.889853   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:20.902185   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0708 20:56:20.913510   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:20.913573   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:20.923964   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0708 20:56:20.934048   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:20.934131   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:20.945078   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0708 20:56:20.955290   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:20.955354   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:56:20.966182   59655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:20.977508   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:21.319213   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:21.511204   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:23.942367   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:22.755738   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:22.756182   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:22.756243   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:22.756167   60369 retry.go:31] will retry after 1.858003233s: waiting for machine to come up
	I0708 20:56:24.616152   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:24.616674   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:24.616703   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:24.616618   60369 retry.go:31] will retry after 2.203640369s: waiting for machine to come up
	I0708 20:56:22.471504   59655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.152252924s)
	I0708 20:56:22.471539   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.692407   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.756884   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:22.892773   59655 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:22.892888   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.393789   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.893298   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:23.941073   59655 api_server.go:72] duration metric: took 1.048301169s to wait for apiserver process to appear ...
	I0708 20:56:23.941100   59655 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:23.941131   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.221991   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:56:27.222029   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:56:27.222048   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:26.441670   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:28.939138   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:27.353017   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.353069   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:27.442130   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.447304   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.447326   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:27.941979   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:27.951850   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:27.951878   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
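
The repeated probes above keep polling https://192.168.72.163:8444/healthz until the remaining post-start hooks report ok. A minimal Go sketch of such a poll loop is shown here; the timings are assumptions, and TLS verification is skipped only because this illustration targets the apiserver's self-signed certificate (real callers should pin the cluster CA instead).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification for the sketch only; the apiserver cert here is self-signed.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.163:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz error:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("healthz never became ready before the deadline")
}
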
	I0708 20:56:28.441380   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:28.452031   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:28.452069   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:28.941613   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:28.946045   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:28.946084   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:29.441485   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:29.448847   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:29.448877   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:29.941906   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:29.946380   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:56:29.946416   59655 api_server.go:103] status: https://192.168.72.163:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:30.442013   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 20:56:30.447291   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 200:
	ok
	I0708 20:56:30.454664   59655 api_server.go:141] control plane version: v1.30.2
	I0708 20:56:30.454693   59655 api_server.go:131] duration metric: took 6.513586414s to wait for apiserver health ...
	I0708 20:56:30.454701   59655 cni.go:84] Creating CNI manager for ""
	I0708 20:56:30.454707   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:30.456577   59655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
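
The block above is the apiserver health wait: api_server.go polls https://192.168.72.163:8444/healthz roughly twice a second and, on every 500, logs the per-check [+]/[-] breakdown until the failing post-start hooks clear and the endpoint returns 200 (here after about 6.5s), at which point bridge CNI configuration starts. A minimal Go sketch of that polling pattern follows; it is illustrative only and assumes a plain net/http client with TLS verification disabled, which is not necessarily how minikube builds its transport.

    // Poll an apiserver /healthz endpoint until it returns 200 OK or the
    // timeout expires. Illustrative sketch only; minikube's real wait loop
    // differs in detail.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{
                // Assumption: skip verification of the apiserver's self-signed cert.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
            Timeout: 5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // body is just "ok"
                }
                // On 500 the body carries the [+]/[-] per-check breakdown seen above.
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
        if err := pollHealthz("https://192.168.72.163:8444/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
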
	I0708 20:56:26.821665   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:26.822266   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:26.822297   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:26.822209   60369 retry.go:31] will retry after 3.478824168s: waiting for machine to come up
	I0708 20:56:30.302329   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:30.302766   58678 main.go:141] libmachine: (no-preload-028021) DBG | unable to find current IP address of domain no-preload-028021 in network mk-no-preload-028021
	I0708 20:56:30.302796   58678 main.go:141] libmachine: (no-preload-028021) DBG | I0708 20:56:30.302707   60369 retry.go:31] will retry after 3.597512692s: waiting for machine to come up
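
The interleaved 58678 lines belong to a second profile (no-preload-028021) whose KVM domain is still waiting for a DHCP lease; libmachine retries the IP lookup with a growing, jittered delay ("will retry after 3.478824168s"). A rough sketch of that retry loop, assuming a hypothetical lookupIP helper standing in for the libvirt lease query:

    // Retry an IP lookup with a growing, jittered delay until the machine
    // comes up. lookupIP is a hypothetical stand-in for libmachine's
    // DHCP-lease query; the backoff policy here is illustrative.
    package provision

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := time.Second
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            // Jitter keeps parallel test VMs from polling libvirt in lockstep.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 10*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("machine did not get an IP within %s", timeout)
    }
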
	I0708 20:56:30.458168   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:56:30.469918   59655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 20:56:30.492348   59655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:56:30.503174   59655 system_pods.go:59] 8 kube-system pods found
	I0708 20:56:30.503210   59655 system_pods.go:61] "coredns-7db6d8ff4d-c4tzw" [e5ea7dde-1134-45d0-b3e2-176e6a8f068e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:56:30.503218   59655 system_pods.go:61] "etcd-default-k8s-diff-port-071971" [693fd668-83c2-43e6-bf43-7b1a9e654db0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:56:30.503226   59655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071971" [eadde33a-b967-4a58-9730-d163e6b8c0c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:56:30.503233   59655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071971" [99bd8e55-e0a9-4071-a0f0-dc9d1e79b58d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:56:30.503238   59655 system_pods.go:61] "kube-proxy-vq4l8" [e2a4779c-e8ed-4f5b-872b-d10604936178] Running
	I0708 20:56:30.503244   59655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071971" [af6b0a79-be1e-4caa-86a6-47ac782ac438] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:56:30.503250   59655 system_pods.go:61] "metrics-server-569cc877fc-h2dzd" [7075aa8e-0716-4965-8a13-3ed804190b3e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:56:30.503257   59655 system_pods.go:61] "storage-provisioner" [9fca5ac9-cd65-4257-b012-20ded80a39a5] Running
	I0708 20:56:30.503265   59655 system_pods.go:74] duration metric: took 10.887672ms to wait for pod list to return data ...
	I0708 20:56:30.503279   59655 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:56:30.509137   59655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:56:30.509170   59655 node_conditions.go:123] node cpu capacity is 2
	I0708 20:56:30.509189   59655 node_conditions.go:105] duration metric: took 5.903588ms to run NodePressure ...
	I0708 20:56:30.509210   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:30.780430   59655 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:56:30.788138   59655 kubeadm.go:733] kubelet initialised
	I0708 20:56:30.788168   59655 kubeadm.go:734] duration metric: took 7.711132ms waiting for restarted kubelet to initialise ...
	I0708 20:56:30.788177   59655 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:56:30.797001   59655 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace to be "Ready" ...
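
From here the default-k8s-diff-port profile waits up to 4m0s for each system-critical pod to report the Ready condition (the pod_ready lines that follow). A sketch of that readiness poll, assuming k8s.io/client-go and wait.PollUntilContextTimeout from a recent k8s.io/apimachinery; minikube's own pod_ready helper differs in detail:

    // Poll a pod until its Ready condition is True, or the timeout expires.
    package ready

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            return isPodReady(pod), nil
        })
    }
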
	I0708 20:56:30.939824   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:32.940860   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:34.941652   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:33.901849   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.902332   58678 main.go:141] libmachine: (no-preload-028021) Found IP for machine: 192.168.39.108
	I0708 20:56:33.902356   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has current primary IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.902361   58678 main.go:141] libmachine: (no-preload-028021) Reserving static IP address...
	I0708 20:56:33.902766   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "no-preload-028021", mac: "52:54:00:c5:5d:f8", ip: "192.168.39.108"} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:33.902797   58678 main.go:141] libmachine: (no-preload-028021) DBG | skip adding static IP to network mk-no-preload-028021 - found existing host DHCP lease matching {name: "no-preload-028021", mac: "52:54:00:c5:5d:f8", ip: "192.168.39.108"}
	I0708 20:56:33.902808   58678 main.go:141] libmachine: (no-preload-028021) Reserved static IP address: 192.168.39.108
	I0708 20:56:33.902825   58678 main.go:141] libmachine: (no-preload-028021) Waiting for SSH to be available...
	I0708 20:56:33.902835   58678 main.go:141] libmachine: (no-preload-028021) DBG | Getting to WaitForSSH function...
	I0708 20:56:33.905031   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.905318   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:33.905341   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:33.905479   58678 main.go:141] libmachine: (no-preload-028021) DBG | Using SSH client type: external
	I0708 20:56:33.905509   58678 main.go:141] libmachine: (no-preload-028021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa (-rw-------)
	I0708 20:56:33.905535   58678 main.go:141] libmachine: (no-preload-028021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 20:56:33.905560   58678 main.go:141] libmachine: (no-preload-028021) DBG | About to run SSH command:
	I0708 20:56:33.905573   58678 main.go:141] libmachine: (no-preload-028021) DBG | exit 0
	I0708 20:56:34.035510   58678 main.go:141] libmachine: (no-preload-028021) DBG | SSH cmd err, output: <nil>: 
	I0708 20:56:34.035876   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetConfigRaw
	I0708 20:56:34.036501   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:34.039070   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.039467   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.039496   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.039711   58678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/config.json ...
	I0708 20:56:34.039936   58678 machine.go:94] provisionDockerMachine start ...
	I0708 20:56:34.039955   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:34.040191   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.042269   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.042640   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.042666   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.042793   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.042954   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.043125   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.043292   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.043496   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.043662   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.043671   58678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 20:56:34.156092   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
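
provisionDockerMachine starts by running `hostname` on the guest over SSH. In this log libmachine shells out to the external /usr/bin/ssh client with the profile's id_rsa key; the sketch below does the equivalent with golang.org/x/crypto/ssh purely for illustration, with host-key checking disabled as in the logged ssh options.

    // Run a single command on the guest over SSH. Illustrative only: the
    // logged provisioning path execs the system ssh binary with these options.
    package provision

    import (
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.Output(cmd) // e.g. "hostname" -> "minikube"
        return string(out), err
    }

For example, runOverSSH("192.168.39.108:22", "docker", "<path-to>/id_rsa", "hostname") corresponds to the exchange logged just above.
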
	
	I0708 20:56:34.156143   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.156412   58678 buildroot.go:166] provisioning hostname "no-preload-028021"
	I0708 20:56:34.156441   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.156639   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.159015   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.159420   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.159467   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.159606   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.159817   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.160015   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.160214   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.160407   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.160572   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.160584   58678 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-028021 && echo "no-preload-028021" | sudo tee /etc/hostname
	I0708 20:56:34.286222   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-028021
	
	I0708 20:56:34.286250   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.289067   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.289376   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.289399   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.289617   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.289832   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.289991   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.290129   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.290310   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.290471   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.290485   58678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-028021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-028021/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-028021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 20:56:34.414724   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 20:56:34.414749   58678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 20:56:34.414790   58678 buildroot.go:174] setting up certificates
	I0708 20:56:34.414799   58678 provision.go:84] configureAuth start
	I0708 20:56:34.414808   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetMachineName
	I0708 20:56:34.415115   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:34.417919   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.418241   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.418273   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.418491   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.421129   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.421603   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.421634   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.421756   58678 provision.go:143] copyHostCerts
	I0708 20:56:34.421818   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 20:56:34.421839   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 20:56:34.421906   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 20:56:34.422023   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 20:56:34.422034   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 20:56:34.422064   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 20:56:34.422151   58678 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 20:56:34.422161   58678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 20:56:34.422196   58678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 20:56:34.422276   58678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.no-preload-028021 san=[127.0.0.1 192.168.39.108 localhost minikube no-preload-028021]
	I0708 20:56:34.634189   58678 provision.go:177] copyRemoteCerts
	I0708 20:56:34.634253   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 20:56:34.634281   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.637123   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.637364   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.637396   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.637609   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.637912   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.638172   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.638410   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:34.726761   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 20:56:34.751947   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0708 20:56:34.776165   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 20:56:34.803849   58678 provision.go:87] duration metric: took 389.036659ms to configureAuth
	I0708 20:56:34.803880   58678 buildroot.go:189] setting minikube options for container-runtime
	I0708 20:56:34.804125   58678 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:56:34.804202   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:34.808559   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.808925   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:34.808966   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:34.809164   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:34.809416   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.809572   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:34.809710   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:34.809874   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:34.810069   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:34.810097   58678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 20:56:35.096796   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 20:56:35.096822   58678 machine.go:97] duration metric: took 1.056870853s to provisionDockerMachine
	I0708 20:56:35.096834   58678 start.go:293] postStartSetup for "no-preload-028021" (driver="kvm2")
	I0708 20:56:35.096847   58678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 20:56:35.096864   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.097227   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 20:56:35.097266   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.100040   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.100428   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.100449   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.100637   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.100826   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.100967   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.101128   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.187796   58678 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 20:56:35.192221   58678 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 20:56:35.192248   58678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 20:56:35.192315   58678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 20:56:35.192383   58678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 20:56:35.192467   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 20:56:35.204227   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:35.230404   58678 start.go:296] duration metric: took 133.555408ms for postStartSetup
	I0708 20:56:35.230446   58678 fix.go:56] duration metric: took 21.04954132s for fixHost
	I0708 20:56:35.230464   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.233341   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.233654   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.233685   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.233878   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.234070   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.234248   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.234413   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.234611   58678 main.go:141] libmachine: Using SSH client type: native
	I0708 20:56:35.234834   58678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0708 20:56:35.234849   58678 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 20:56:35.348439   58678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720472195.300246165
	
	I0708 20:56:35.348459   58678 fix.go:216] guest clock: 1720472195.300246165
	I0708 20:56:35.348468   58678 fix.go:229] Guest: 2024-07-08 20:56:35.300246165 +0000 UTC Remote: 2024-07-08 20:56:35.230449891 +0000 UTC m=+338.995803708 (delta=69.796274ms)
	I0708 20:56:35.348487   58678 fix.go:200] guest clock delta is within tolerance: 69.796274ms
	I0708 20:56:35.348492   58678 start.go:83] releasing machines lock for "no-preload-028021", held for 21.167624903s
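
fix.go reads the guest clock over SSH and accepts it when the delta against the host-side timestamp is small (69.796274ms above). A minimal sketch of that comparison; the tolerance is whatever the caller passes in, and the value used by minikube is not restated here.

    // Compare the guest clock (read over SSH) against the host-side timestamp.
    package provision

    import "time"

    // clockWithinTolerance returns the absolute guest/host delta and whether
    // it is within the caller-supplied tolerance.
    func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

With the values logged above the delta is roughly 69.8ms, so the guest clock is left alone and the machines lock is released.
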
	I0708 20:56:35.348509   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.348752   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:35.351300   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.351779   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.351806   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.351977   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352557   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352725   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:56:35.352799   58678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 20:56:35.352839   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.352942   58678 ssh_runner.go:195] Run: cat /version.json
	I0708 20:56:35.352969   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:56:35.355646   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356037   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.356071   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356117   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356267   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.356470   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.356555   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:35.356580   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:35.356642   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.356706   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:56:35.356770   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.356885   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:56:35.357020   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:56:35.357154   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:56:35.438344   58678 ssh_runner.go:195] Run: systemctl --version
	I0708 20:56:35.470518   58678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 20:56:35.628022   58678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 20:56:35.636390   58678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 20:56:35.636468   58678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 20:56:35.654729   58678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 20:56:35.654753   58678 start.go:494] detecting cgroup driver to use...
	I0708 20:56:35.654824   58678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 20:56:35.678564   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 20:56:35.697122   58678 docker.go:217] disabling cri-docker service (if available) ...
	I0708 20:56:35.697202   58678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 20:56:35.713388   58678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 20:56:35.728254   58678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 20:56:35.874433   58678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 20:56:36.062521   58678 docker.go:233] disabling docker service ...
	I0708 20:56:36.062614   58678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 20:56:36.081225   58678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 20:56:36.096855   58678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 20:56:36.229455   58678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 20:56:36.375525   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 20:56:36.390772   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 20:56:36.411762   58678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 20:56:36.411905   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.423149   58678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 20:56:36.423218   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.434145   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.447568   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.458758   58678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 20:56:36.469393   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.479663   58678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.501298   58678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 20:56:36.512407   58678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 20:56:36.522400   58678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 20:56:36.522469   58678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 20:56:36.536310   58678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 20:56:36.547955   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:36.680408   58678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 20:56:36.860344   58678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 20:56:36.860416   58678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 20:56:36.866153   58678 start.go:562] Will wait 60s for crictl version
	I0708 20:56:36.866221   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:36.871623   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 20:56:36.917564   58678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 20:56:36.917655   58678 ssh_runner.go:195] Run: crio --version
	I0708 20:56:36.954595   58678 ssh_runner.go:195] Run: crio --version
	I0708 20:56:36.985788   58678 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
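
The sed edits above pin the pause image to registry.k8s.io/pause:3.9, switch CRI-O to the cgroupfs cgroup manager with conmon_cgroup = "pod", and open unprivileged ports via default_sysctls, then restart crio and wait for the CRI socket and crictl version. The snippet below renders an approximate end state of /etc/crio/crio.conf.d/02-crio.conf under those edits; section placement follows upstream CRI-O defaults, and the real drop-in shipped in the minikube ISO contains more keys.

    // Approximate end state of the CRI-O drop-in after the logged sed edits.
    // The TOML layout is an assumption based on upstream CRI-O defaults.
    package provision

    import "os"

    const crioDropIn = `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `

    func writeCrioDropIn(path string) error {
        return os.WriteFile(path, []byte(crioDropIn), 0o644)
    }
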
	I0708 20:56:32.805051   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:35.303979   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:36.303556   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.303581   59655 pod_ready.go:81] duration metric: took 5.506548207s for pod "coredns-7db6d8ff4d-c4tzw" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.303590   59655 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.308571   59655 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.308596   59655 pod_ready.go:81] duration metric: took 4.998994ms for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.308610   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.314379   59655 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:36.314402   59655 pod_ready.go:81] duration metric: took 5.784289ms for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.314411   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:36.942775   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:39.440483   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:36.987568   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetIP
	I0708 20:56:36.990699   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:36.991105   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:56:36.991146   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:56:36.991378   58678 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 20:56:36.996102   58678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:37.012228   58678 kubeadm.go:877] updating cluster {Name:no-preload-028021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 20:56:37.012390   58678 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 20:56:37.012439   58678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 20:56:37.050690   58678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 20:56:37.050715   58678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 20:56:37.050765   58678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.050988   58678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.051005   58678 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.051146   58678 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.051199   58678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.051323   58678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.051396   58678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.051560   58678 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0708 20:56:37.052741   58678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.052826   58678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.052840   58678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.052853   58678 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0708 20:56:37.052910   58678 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.052742   58678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.052741   58678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.052744   58678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.237714   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.238720   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.246932   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0708 20:56:37.253938   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.256152   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.264291   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.304685   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.316620   58678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.2" does not exist at hash "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940" in container runtime
	I0708 20:56:37.316664   58678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.316710   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.352464   58678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.387003   58678 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0708 20:56:37.387039   58678 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.387078   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.513840   58678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.2" does not exist at hash "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974" in container runtime
	I0708 20:56:37.513886   58678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.513925   58678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.2" does not exist at hash "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe" in container runtime
	I0708 20:56:37.513938   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.513958   58678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.513987   58678 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0708 20:56:37.514000   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514016   58678 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.514054   58678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.2" does not exist at hash "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772" in container runtime
	I0708 20:56:37.514097   58678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.514061   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514136   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514138   58678 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0708 20:56:37.514078   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.2
	I0708 20:56:37.514159   58678 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.514191   58678 ssh_runner.go:195] Run: which crictl
	I0708 20:56:37.514224   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0708 20:56:37.535635   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 20:56:37.535678   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.2
	I0708 20:56:37.535744   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0708 20:56:37.535744   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.2
	I0708 20:56:37.596995   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2
	I0708 20:56:37.597092   58678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:56:37.597102   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.651190   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0708 20:56:37.651320   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:37.695843   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0708 20:56:37.695944   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2
	I0708 20:56:37.695995   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.2 (exists)
	I0708 20:56:37.696018   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:37.696020   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.696052   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:37.695849   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2
	I0708 20:56:37.696071   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2
	I0708 20:56:37.695875   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0708 20:56:37.696117   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:37.696211   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:37.721410   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0708 20:56:37.721453   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.2 (exists)
	I0708 20:56:37.721536   58678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0708 20:56:37.721644   58678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:39.890974   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.19489331s)
	I0708 20:56:39.891017   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.2 (exists)
	I0708 20:56:39.891070   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.2: (2.194976871s)
	I0708 20:56:39.891096   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.2 from cache
	I0708 20:56:39.891107   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.194875907s)
	I0708 20:56:39.891117   58678 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:39.891120   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0708 20:56:39.891156   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.2: (2.194966409s)
	I0708 20:56:39.891175   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0708 20:56:39.891184   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.2 (exists)
	I0708 20:56:39.891196   58678 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.169535432s)
	I0708 20:56:39.891212   58678 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0708 20:56:37.824606   59655 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:37.824634   59655 pod_ready.go:81] duration metric: took 1.510214968s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.824646   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vq4l8" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.829963   59655 pod_ready.go:92] pod "kube-proxy-vq4l8" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:37.829988   59655 pod_ready.go:81] duration metric: took 5.334688ms for pod "kube-proxy-vq4l8" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:37.829997   59655 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:38.338575   59655 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 20:56:38.338611   59655 pod_ready.go:81] duration metric: took 508.60515ms for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:38.338625   59655 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" ...
	I0708 20:56:40.346498   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:41.939773   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:43.941838   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:41.962256   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.071056184s)
	I0708 20:56:41.962281   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0708 20:56:41.962304   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:41.962349   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2
	I0708 20:56:44.325730   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.2: (2.363358371s)
	I0708 20:56:44.325760   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.2 from cache
	I0708 20:56:44.325789   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:44.325853   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2
	I0708 20:56:42.845177   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:44.846215   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:46.441086   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:48.939348   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:46.588882   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.2: (2.263001s)
	I0708 20:56:46.588909   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.2 from cache
	I0708 20:56:46.588931   58678 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:46.588980   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0708 20:56:50.590689   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.001689035s)
	I0708 20:56:50.590724   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0708 20:56:50.590758   58678 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:50.590813   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2
	I0708 20:56:47.345179   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:49.346736   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:51.846003   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:50.940095   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:53.441346   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:52.446198   58678 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.2: (1.855362154s)
	I0708 20:56:52.446229   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.2 from cache
	I0708 20:56:52.446247   58678 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:52.446284   58678 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0708 20:56:53.400379   58678 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19195-5988/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0708 20:56:53.400419   58678 cache_images.go:123] Successfully loaded all cached images
	I0708 20:56:53.400424   58678 cache_images.go:92] duration metric: took 16.349697925s to LoadCachedImages
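Because this is a no-preload profile, the preload tarball is absent: each image above is inspected with podman, any stale copy is removed with crictl rmi, and the archive from the host cache is loaded with "sudo podman load -i ...". A condensed sketch of that load step, assuming the archives already sit under /var/lib/minikube/images; the wrapper function is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // loadCachedImage loads one image archive into CRI-O's storage via podman,
    // mirroring the "sudo podman load -i ..." calls in the log above.
    func loadCachedImage(archive string) error {
        out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v\n%s", archive, err, out)
        }
        return nil
    }

    func main() {
        images := []string{
            "/var/lib/minikube/images/kube-scheduler_v1.30.2",
            "/var/lib/minikube/images/coredns_v1.11.1",
            "/var/lib/minikube/images/etcd_3.5.12-0",
        }
        for _, img := range images {
            if err := loadCachedImage(img); err != nil {
                fmt.Println(err)
            }
        }
    }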
	I0708 20:56:53.400436   58678 kubeadm.go:928] updating node { 192.168.39.108 8443 v1.30.2 crio true true} ...
	I0708 20:56:53.400599   58678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-028021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 20:56:53.400692   58678 ssh_runner.go:195] Run: crio config
	I0708 20:56:53.452091   58678 cni.go:84] Creating CNI manager for ""
	I0708 20:56:53.452117   58678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:56:53.452131   58678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 20:56:53.452150   58678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.108 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-028021 NodeName:no-preload-028021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 20:56:53.452285   58678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-028021"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 20:56:53.452344   58678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 20:56:53.464447   58678 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 20:56:53.464522   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 20:56:53.474930   58678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0708 20:56:53.493701   58678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 20:56:53.511491   58678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
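The 2161-byte file staged above (/var/tmp/minikube/kubeadm.yaml.new) is the multi-document config printed earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, all consumed by the later "kubeadm init phase ..." calls. A small sketch that walks those documents and prints each kind, assuming gopkg.in/yaml.v3 is available; illustrative only, not part of minikube.

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Each document carries apiVersion/kind, e.g. kubeadm.k8s.io/v1beta3 ClusterConfiguration.
            fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
        }
    }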
	I0708 20:56:53.530848   58678 ssh_runner.go:195] Run: grep 192.168.39.108	control-plane.minikube.internal$ /etc/hosts
	I0708 20:56:53.534931   58678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 20:56:53.547590   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:56:53.658960   58678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 20:56:53.677127   58678 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021 for IP: 192.168.39.108
	I0708 20:56:53.677151   58678 certs.go:194] generating shared ca certs ...
	I0708 20:56:53.677169   58678 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:56:53.677296   58678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 20:56:53.677330   58678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 20:56:53.677338   58678 certs.go:256] generating profile certs ...
	I0708 20:56:53.677420   58678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.key
	I0708 20:56:53.677471   58678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.key.c3084b2b
	I0708 20:56:53.677511   58678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.key
	I0708 20:56:53.677613   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 20:56:53.677639   58678 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 20:56:53.677645   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 20:56:53.677677   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 20:56:53.677752   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 20:56:53.677785   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 20:56:53.677825   58678 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 20:56:53.680483   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 20:56:53.739386   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 20:56:53.770850   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 20:56:53.813958   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 20:56:53.850256   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0708 20:56:53.891539   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 20:56:53.921136   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 20:56:53.948966   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 20:56:53.977129   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 20:56:54.002324   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 20:56:54.028222   58678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 20:56:54.054099   58678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 20:56:54.073386   58678 ssh_runner.go:195] Run: openssl version
	I0708 20:56:54.079883   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 20:56:54.092980   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.097451   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.097503   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 20:56:54.103507   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 20:56:54.115123   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 20:56:54.126757   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.131534   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.131579   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 20:56:54.137333   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 20:56:54.148368   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 20:56:54.159628   58678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.164230   58678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.164280   58678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 20:56:54.170068   58678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 20:56:54.182152   58678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 20:56:54.187146   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 20:56:54.193425   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 20:56:54.200491   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 20:56:54.207006   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 20:56:54.213285   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 20:56:54.220313   58678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
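Each "openssl x509 ... -checkend 86400" call above simply asks whether the certificate expires within the next 24 hours; certificates are only regenerated when one of these checks fails. An equivalent check using Go's crypto/x509, with the first cert path taken from the log; expiresWithin is an illustrative helper.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded cert at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Until(cert.NotAfter) < d, nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon) // mirrors openssl -checkend 86400
    }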
	I0708 20:56:54.227497   58678 kubeadm.go:391] StartCluster: {Name:no-preload-028021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-028021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 20:56:54.227597   58678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 20:56:54.227657   58678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:54.273025   58678 cri.go:89] found id: ""
	I0708 20:56:54.273094   58678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 20:56:54.284942   58678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 20:56:54.284965   58678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 20:56:54.284972   58678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 20:56:54.285023   58678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 20:56:54.296666   58678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:56:54.297740   58678 kubeconfig.go:125] found "no-preload-028021" server: "https://192.168.39.108:8443"
	I0708 20:56:54.299928   58678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 20:56:54.310186   58678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.108
	I0708 20:56:54.310224   58678 kubeadm.go:1154] stopping kube-system containers ...
	I0708 20:56:54.310235   58678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 20:56:54.310290   58678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 20:56:54.351640   58678 cri.go:89] found id: ""
	I0708 20:56:54.351709   58678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 20:56:54.370292   58678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 20:56:54.380551   58678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 20:56:54.380571   58678 kubeadm.go:156] found existing configuration files:
	
	I0708 20:56:54.380611   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 20:56:54.391462   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 20:56:54.391525   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 20:56:54.401804   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 20:56:54.411423   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 20:56:54.411501   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 20:56:54.422126   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 20:56:54.432236   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 20:56:54.432299   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 20:56:54.443001   58678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 20:56:54.454210   58678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 20:56:54.454271   58678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 20:56:54.465426   58678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 20:56:54.477714   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:54.593844   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.651092   58678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.057214047s)
	I0708 20:56:55.651120   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.862342   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:55.952093   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:56:56.070164   58678 api_server.go:52] waiting for apiserver process to appear ...
	I0708 20:56:56.070232   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:53.846869   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:55.847242   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:55.941645   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:58.440406   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:56:56.570644   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:57.071067   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:56:57.099879   58678 api_server.go:72] duration metric: took 1.02971362s to wait for apiserver process to appear ...
	I0708 20:56:57.099907   58678 api_server.go:88] waiting for apiserver healthz status ...
	I0708 20:56:57.099932   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.102677   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:57:00.102805   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:57:00.102854   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.143035   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 20:57:00.143069   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 20:57:00.600694   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:00.605315   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:00.605349   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:01.100628   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:01.106209   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:01.106235   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:56:58.345619   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:00.346515   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:01.600656   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:01.605348   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:01.605381   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:02.101023   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:02.105457   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:02.105490   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:02.600058   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:02.604370   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:02.604397   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:03.100641   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:03.105655   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 20:57:03.105685   58678 api_server.go:103] status: https://192.168.39.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 20:57:03.600193   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 20:57:03.604714   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I0708 20:57:03.617761   58678 api_server.go:141] control plane version: v1.30.2
	I0708 20:57:03.617795   58678 api_server.go:131] duration metric: took 6.517881236s to wait for apiserver health ...
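The polling above simply retries https://192.168.39.108:8443/healthz until it stops returning 500 ("[-]poststarthook/apiservice-discovery-controller failed") and answers 200 "ok". As a rough Go sketch of that retry loop (not minikube's actual api_server.go; the insecure TLS client and the 500ms interval are assumptions for illustration):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the overall timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // test cluster only
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reports "ok"
                }
                // A 500 with a "[-]poststarthook/... failed" line means a hook has not finished yet.
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // the log above polls roughly twice per second
        }
        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.108:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }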
	I0708 20:57:03.617805   58678 cni.go:84] Creating CNI manager for ""
	I0708 20:57:03.617811   58678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 20:57:03.619739   58678 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 20:57:00.940450   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:03.448484   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:03.621363   58678 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 20:57:03.635846   58678 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
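The 496-byte 1-k8s.conflist copied above is not reproduced in this log, so the following is only a generic bridge + host-local CNI config of the same general shape, built and printed from Go; every value in it is an assumption for illustration, not the file minikube actually generated:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Generic bridge CNI conflist; subnet and plugin options are placeholders.
        conflist := map[string]interface{}{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]interface{}{
                {
                    "type":             "bridge",
                    "bridge":           "bridge",
                    "isDefaultGateway": true,
                    "ipMasq":           true,
                    "hairpinMode":      true,
                    "ipam": map[string]interface{}{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out)) // a file of this shape would live under /etc/cni/net.d/
    }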
	I0708 20:57:03.667045   58678 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 20:57:03.686236   58678 system_pods.go:59] 8 kube-system pods found
	I0708 20:57:03.686308   58678 system_pods.go:61] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 20:57:03.686322   58678 system_pods.go:61] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 20:57:03.686334   58678 system_pods.go:61] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 20:57:03.686348   58678 system_pods.go:61] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 20:57:03.686354   58678 system_pods.go:61] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 20:57:03.686363   58678 system_pods.go:61] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 20:57:03.686371   58678 system_pods.go:61] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 20:57:03.686379   58678 system_pods.go:61] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 20:57:03.686390   58678 system_pods.go:74] duration metric: took 19.320099ms to wait for pod list to return data ...
	I0708 20:57:03.686402   58678 node_conditions.go:102] verifying NodePressure condition ...
	I0708 20:57:03.696401   58678 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 20:57:03.696436   58678 node_conditions.go:123] node cpu capacity is 2
	I0708 20:57:03.696449   58678 node_conditions.go:105] duration metric: took 10.038061ms to run NodePressure ...
	I0708 20:57:03.696474   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 20:57:03.981698   58678 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 20:57:03.987357   58678 kubeadm.go:733] kubelet initialised
	I0708 20:57:03.987379   58678 kubeadm.go:734] duration metric: took 5.653044ms waiting for restarted kubelet to initialise ...
	I0708 20:57:03.987387   58678 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:03.993341   58678 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:03.999133   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:03.999165   58678 pod_ready.go:81] duration metric: took 5.798521ms for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:03.999177   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:03.999188   58678 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.004640   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "etcd-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.004666   58678 pod_ready.go:81] duration metric: took 5.471219ms for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.004676   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "etcd-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.004685   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.011313   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-apiserver-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.011342   58678 pod_ready.go:81] duration metric: took 6.65044ms for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.011354   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-apiserver-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.011364   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.071038   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.071092   58678 pod_ready.go:81] duration metric: took 59.716762ms for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.071105   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.071114   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.470702   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-proxy-6p6l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.470732   58678 pod_ready.go:81] duration metric: took 399.6044ms for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.470743   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-proxy-6p6l6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.470753   58678 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:04.871002   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "kube-scheduler-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.871036   58678 pod_ready.go:81] duration metric: took 400.275337ms for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:04.871045   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "kube-scheduler-no-preload-028021" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:04.871052   58678 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:05.270858   58678 pod_ready.go:97] node "no-preload-028021" hosting pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:05.270883   58678 pod_ready.go:81] duration metric: took 399.822389ms for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	E0708 20:57:05.270892   58678 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-028021" hosting pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:05.270899   58678 pod_ready.go:38] duration metric: took 1.283504929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
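The pod_ready.go waits above check each system-critical pod's Ready condition and skip pods whose node is not yet "Ready". A hedged client-go sketch of that kind of check (the kubeconfig path is a placeholder, the pod name is taken from the log, and the node-status short-circuit the real code applies is omitted):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-bb6cr", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }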
	I0708 20:57:05.270914   58678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 20:57:05.284879   58678 ops.go:34] apiserver oom_adj: -16
	I0708 20:57:05.284900   58678 kubeadm.go:591] duration metric: took 10.999921787s to restartPrimaryControlPlane
	I0708 20:57:05.284912   58678 kubeadm.go:393] duration metric: took 11.057424996s to StartCluster
	I0708 20:57:05.284931   58678 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:57:05.285024   58678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:57:05.287297   58678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 20:57:05.287607   58678 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 20:57:05.287708   58678 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 20:57:05.287790   58678 addons.go:69] Setting storage-provisioner=true in profile "no-preload-028021"
	I0708 20:57:05.287807   58678 config.go:182] Loaded profile config "no-preload-028021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:57:05.287809   58678 addons.go:69] Setting default-storageclass=true in profile "no-preload-028021"
	I0708 20:57:05.287845   58678 addons.go:69] Setting metrics-server=true in profile "no-preload-028021"
	I0708 20:57:05.287900   58678 addons.go:234] Setting addon metrics-server=true in "no-preload-028021"
	W0708 20:57:05.287912   58678 addons.go:243] addon metrics-server should already be in state true
	I0708 20:57:05.287946   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.287854   58678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-028021"
	I0708 20:57:05.287825   58678 addons.go:234] Setting addon storage-provisioner=true in "no-preload-028021"
	W0708 20:57:05.288007   58678 addons.go:243] addon storage-provisioner should already be in state true
	I0708 20:57:05.288040   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.288276   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288308   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.288380   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288382   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.288430   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.288413   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.289690   58678 out.go:177] * Verifying Kubernetes components...
	I0708 20:57:05.291336   58678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 20:57:05.310203   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0708 20:57:05.310610   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.311107   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.311129   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.311527   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.311990   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.312026   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.332966   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0708 20:57:05.332984   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I0708 20:57:05.333056   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I0708 20:57:05.333449   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333466   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333497   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.333994   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334014   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334138   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334146   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.334158   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334163   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.334347   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334514   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.334640   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334683   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.334822   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.335285   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.335304   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.337444   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.338763   58678 addons.go:234] Setting addon default-storageclass=true in "no-preload-028021"
	W0708 20:57:05.338785   58678 addons.go:243] addon default-storageclass should already be in state true
	I0708 20:57:05.338814   58678 host.go:66] Checking if "no-preload-028021" exists ...
	I0708 20:57:05.339217   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.339304   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.339800   58678 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 20:57:05.341280   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 20:57:05.341303   58678 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 20:57:05.341327   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.344838   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.345488   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.345504   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.345683   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.345892   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.346146   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.346326   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.359060   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0708 20:57:05.359692   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.360186   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.360207   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.360545   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.361128   58678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:57:05.361173   58678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:57:05.361352   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0708 20:57:05.361971   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.362509   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.362525   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.362911   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.363148   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.364747   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.366914   58678 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 20:57:05.368450   58678 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:57:05.368467   58678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 20:57:05.368483   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.372067   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.372368   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.372387   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.372767   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.373030   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.373235   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.373389   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.379255   58678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39973
	I0708 20:57:05.379732   58678 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:57:05.380405   58678 main.go:141] libmachine: Using API Version  1
	I0708 20:57:05.380428   58678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:57:05.380832   58678 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:57:05.381039   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetState
	I0708 20:57:05.382973   58678 main.go:141] libmachine: (no-preload-028021) Calling .DriverName
	I0708 20:57:05.383191   58678 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 20:57:05.383211   58678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 20:57:05.383231   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHHostname
	I0708 20:57:05.386273   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.386682   58678 main.go:141] libmachine: (no-preload-028021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:5d:f8", ip: ""} in network mk-no-preload-028021: {Iface:virbr4 ExpiryTime:2024-07-08 21:56:25 +0000 UTC Type:0 Mac:52:54:00:c5:5d:f8 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:no-preload-028021 Clientid:01:52:54:00:c5:5d:f8}
	I0708 20:57:05.386705   58678 main.go:141] libmachine: (no-preload-028021) DBG | domain no-preload-028021 has defined IP address 192.168.39.108 and MAC address 52:54:00:c5:5d:f8 in network mk-no-preload-028021
	I0708 20:57:05.386997   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHPort
	I0708 20:57:05.387184   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHKeyPath
	I0708 20:57:05.387336   58678 main.go:141] libmachine: (no-preload-028021) Calling .GetSSHUsername
	I0708 20:57:05.387497   58678 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa Username:docker}
	I0708 20:57:05.506081   58678 ssh_runner.go:195] Run: sudo systemctl start kubelet
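The sshutil/ssh_runner lines above build an SSH client from the machine's id_rsa key and then run commands such as "sudo systemctl start kubelet" on the node. A minimal x/crypto/ssh sketch of that pattern, reusing the host, user and key path shown in the log but otherwise generic (the InsecureIgnoreHostKey callback is an assumption suitable only for throwaway test VMs):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19195-5988/.minikube/machines/no-preload-028021/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only; never in production
        }
        client, err := ssh.Dial("tcp", "192.168.39.108:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        // Run the same kind of command the ssh_runner lines above run on the node.
        out, err := session.CombinedOutput("sudo systemctl start kubelet")
        fmt.Printf("%s err=%v\n", out, err)
    }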
	I0708 20:57:05.525373   58678 node_ready.go:35] waiting up to 6m0s for node "no-preload-028021" to be "Ready" ...
	I0708 20:57:05.594638   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 20:57:05.594665   58678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 20:57:05.615378   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 20:57:05.620306   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 20:57:05.620331   58678 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 20:57:05.639840   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 20:57:05.692078   58678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 20:57:05.692109   58678 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 20:57:05.756364   58678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 20:57:06.822244   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.206830336s)
	I0708 20:57:06.822310   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18243745s)
	I0708 20:57:06.822323   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822385   58678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065981271s)
	I0708 20:57:06.822418   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822432   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822390   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822351   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822504   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822850   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822870   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.822879   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.822886   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.822955   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.822971   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822976   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.822993   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.822995   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.823009   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.823020   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.823010   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.823051   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.823154   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.823164   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.823366   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.823380   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.823390   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.825436   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.825455   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.825465   58678 addons.go:475] Verifying addon metrics-server=true in "no-preload-028021"
	I0708 20:57:06.830088   58678 main.go:141] libmachine: Making call to close driver server
	I0708 20:57:06.830108   58678 main.go:141] libmachine: (no-preload-028021) Calling .Close
	I0708 20:57:06.830406   58678 main.go:141] libmachine: Successfully made call to close driver server
	I0708 20:57:06.830420   58678 main.go:141] libmachine: (no-preload-028021) DBG | Closing plugin on server side
	I0708 20:57:06.830423   58678 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 20:57:06.832322   58678 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0708 20:57:02.845629   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:05.353827   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:05.940469   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:08.439911   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:06.833974   58678 addons.go:510] duration metric: took 1.546270475s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
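The addon step above boils down to copying the manifests onto the node and running kubectl apply against them with the node's kubeconfig. A small sketch of that apply step, assuming a local kubectl binary rather than the over-SSH runner minikube actually uses (the paths mirror the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon runs "kubectl --kubeconfig <cfg> apply -f m1 -f m2 ..." and
    // surfaces kubectl's combined output on failure.
    func applyAddon(kubeconfig string, manifests ...string) error {
        args := []string{"--kubeconfig", kubeconfig, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        err := applyAddon("/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        )
        fmt.Println("apply result:", err)
    }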
	I0708 20:57:07.529328   58678 node_ready.go:53] node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:09.529406   58678 node_ready.go:53] node "no-preload-028021" has status "Ready":"False"
	I0708 20:57:11.030134   58678 node_ready.go:49] node "no-preload-028021" has status "Ready":"True"
	I0708 20:57:11.030162   58678 node_ready.go:38] duration metric: took 5.504751555s for node "no-preload-028021" to be "Ready" ...
	I0708 20:57:11.030174   58678 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 20:57:11.035309   58678 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.039750   58678 pod_ready.go:92] pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.039772   58678 pod_ready.go:81] duration metric: took 4.436756ms for pod "coredns-7db6d8ff4d-bb6cr" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.039783   58678 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.044726   58678 pod_ready.go:92] pod "etcd-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.044748   58678 pod_ready.go:81] duration metric: took 4.958058ms for pod "etcd-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.044756   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.049083   58678 pod_ready.go:92] pod "kube-apiserver-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:11.049104   58678 pod_ready.go:81] duration metric: took 4.34014ms for pod "kube-apiserver-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:11.049115   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:07.846290   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:10.344964   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:10.939618   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:13.445191   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:13.056307   58678 pod_ready.go:102] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:15.056817   58678 pod_ready.go:102] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:16.063838   58678 pod_ready.go:92] pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.063864   58678 pod_ready.go:81] duration metric: took 5.014740635s for pod "kube-controller-manager-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.063875   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.082486   58678 pod_ready.go:92] pod "kube-proxy-6p6l6" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.082529   58678 pod_ready.go:81] duration metric: took 18.642044ms for pod "kube-proxy-6p6l6" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.082545   58678 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.092312   58678 pod_ready.go:92] pod "kube-scheduler-no-preload-028021" in "kube-system" namespace has status "Ready":"True"
	I0708 20:57:16.092337   58678 pod_ready.go:81] duration metric: took 9.783638ms for pod "kube-scheduler-no-preload-028021" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.092347   58678 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	I0708 20:57:16.353120   57466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0708 20:57:16.353203   57466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0708 20:57:16.355269   57466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0708 20:57:16.355317   57466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 20:57:16.355404   57466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 20:57:16.355558   57466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 20:57:16.355708   57466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 20:57:16.355815   57466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 20:57:16.358151   57466 out.go:204]   - Generating certificates and keys ...
	I0708 20:57:16.358312   57466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 20:57:16.358411   57466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 20:57:16.358531   57466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 20:57:16.358641   57466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 20:57:16.358732   57466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 20:57:16.358798   57466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 20:57:16.358893   57466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 20:57:16.359004   57466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 20:57:16.359128   57466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 20:57:16.359209   57466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 20:57:16.359288   57466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 20:57:16.359384   57466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 20:57:16.359509   57466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 20:57:16.359614   57466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 20:57:16.359725   57466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 20:57:16.359794   57466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 20:57:16.359881   57466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 20:57:16.359963   57466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 20:57:16.360002   57466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 20:57:16.360099   57466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 20:57:16.361960   57466 out.go:204]   - Booting up control plane ...
	I0708 20:57:16.362053   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 20:57:16.362196   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 20:57:16.362283   57466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 20:57:16.362402   57466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 20:57:16.362589   57466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 20:57:16.362819   57466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0708 20:57:16.362930   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363170   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363242   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363473   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363580   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.363786   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.363873   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364093   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364247   57466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0708 20:57:16.364435   57466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0708 20:57:16.364445   57466 kubeadm.go:309] 
	I0708 20:57:16.364476   57466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0708 20:57:16.364533   57466 kubeadm.go:309] 		timed out waiting for the condition
	I0708 20:57:16.364541   57466 kubeadm.go:309] 
	I0708 20:57:16.364601   57466 kubeadm.go:309] 	This error is likely caused by:
	I0708 20:57:16.364636   57466 kubeadm.go:309] 		- The kubelet is not running
	I0708 20:57:16.364796   57466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0708 20:57:16.364820   57466 kubeadm.go:309] 
	I0708 20:57:16.364958   57466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0708 20:57:16.365016   57466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0708 20:57:16.365057   57466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0708 20:57:16.365063   57466 kubeadm.go:309] 
	I0708 20:57:16.365208   57466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0708 20:57:16.365339   57466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0708 20:57:16.365356   57466 kubeadm.go:309] 
	I0708 20:57:16.365490   57466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0708 20:57:16.365589   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0708 20:57:16.365694   57466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0708 20:57:16.365869   57466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0708 20:57:16.365969   57466 kubeadm.go:309] 
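The [kubelet-check] failures quoted above come from probing the kubelet's local health endpoint. A tiny sketch of that probe, equivalent to the "curl -sSL http://localhost:10248/healthz" call named in the kubeadm output; a "connection refused" error here corresponds to the kubelet not running or not listening:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            // Matches the failure mode in the log: dial tcp 127.0.0.1:10248: connection refused.
            fmt.Println("kubelet healthz failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
    }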
	I0708 20:57:16.365972   57466 kubeadm.go:393] duration metric: took 7m56.670441698s to StartCluster
	I0708 20:57:16.366023   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 20:57:16.366090   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 20:57:16.435868   57466 cri.go:89] found id: ""
	I0708 20:57:16.435896   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.435904   57466 logs.go:278] No container was found matching "kube-apiserver"
	I0708 20:57:16.435910   57466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 20:57:16.435969   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 20:57:16.478844   57466 cri.go:89] found id: ""
	I0708 20:57:16.478881   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.478896   57466 logs.go:278] No container was found matching "etcd"
	I0708 20:57:16.478904   57466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 20:57:16.478974   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 20:57:16.517414   57466 cri.go:89] found id: ""
	I0708 20:57:16.517439   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.517448   57466 logs.go:278] No container was found matching "coredns"
	I0708 20:57:16.517455   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 20:57:16.517516   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 20:57:16.557036   57466 cri.go:89] found id: ""
	I0708 20:57:16.557063   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.557074   57466 logs.go:278] No container was found matching "kube-scheduler"
	I0708 20:57:16.557081   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 20:57:16.557153   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 20:57:16.593604   57466 cri.go:89] found id: ""
	I0708 20:57:16.593631   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.593641   57466 logs.go:278] No container was found matching "kube-proxy"
	I0708 20:57:16.593648   57466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 20:57:16.593704   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 20:57:16.634143   57466 cri.go:89] found id: ""
	I0708 20:57:16.634173   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.634183   57466 logs.go:278] No container was found matching "kube-controller-manager"
	I0708 20:57:16.634190   57466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 20:57:16.634248   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 20:57:16.676553   57466 cri.go:89] found id: ""
	I0708 20:57:16.676585   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.676595   57466 logs.go:278] No container was found matching "kindnet"
	I0708 20:57:16.676602   57466 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 20:57:16.676663   57466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 20:57:16.715652   57466 cri.go:89] found id: ""
	I0708 20:57:16.715674   57466 logs.go:276] 0 containers: []
	W0708 20:57:16.715682   57466 logs.go:278] No container was found matching "kubernetes-dashboard"
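The scan above asks crictl for containers matching each control-plane name and finds none, which is why the log gathering that follows has to fall back to journalctl. A sketch of that scan as a standalone Go program, using the same crictl flags shown in the log but run locally here instead of over SSH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the container IDs crictl reports for a name filter.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, name := range names {
            ids, err := listContainers(name)
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }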
	I0708 20:57:16.715692   57466 logs.go:123] Gathering logs for dmesg ...
	I0708 20:57:16.715703   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 20:57:16.730747   57466 logs.go:123] Gathering logs for describe nodes ...
	I0708 20:57:16.730776   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0708 20:57:16.814950   57466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0708 20:57:16.814976   57466 logs.go:123] Gathering logs for CRI-O ...
	I0708 20:57:16.815005   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 20:57:16.921144   57466 logs.go:123] Gathering logs for container status ...
	I0708 20:57:16.921194   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 20:57:16.973261   57466 logs.go:123] Gathering logs for kubelet ...
	I0708 20:57:16.973294   57466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 20:57:17.031242   57466 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0708 20:57:17.031307   57466 out.go:239] * 
	W0708 20:57:17.031362   57466 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.031389   57466 out.go:239] * 
	W0708 20:57:17.032214   57466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 20:57:17.035847   57466 out.go:177] 
	W0708 20:57:17.037198   57466 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 20:57:17.037247   57466 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0708 20:57:17.037274   57466 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0708 20:57:17.039077   57466 out.go:177] 
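	(Editor's note, not part of the captured log: the repeated "[kubelet-check]" failures above are kubeadm polling the kubelet healthz endpoint on localhost:10248 and getting "connection refused". The snippet below is a minimal, illustrative Go sketch of that same probe, useful for reproducing the check by hand on the node; it is not minikube's or kubeadm's actual code, and the endpoint and timeout are taken from the log output above.)

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Probe the kubelet healthz endpoint that kubeadm's [kubelet-check] polls.
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// A "connect: connection refused" error here matches the failure mode
			// in the log: the kubelet process is not running (or not listening).
			fmt.Println("kubelet healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
	}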
	I0708 20:57:12.345241   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:14.346235   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:16.347467   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:15.940334   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:17.943302   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:18.102691   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:20.599066   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:18.847908   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:21.345112   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:20.441347   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:22.939786   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:24.940449   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:22.600192   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:25.100175   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:23.346438   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:25.845181   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.439923   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:29.940540   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.600010   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:30.099104   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:27.845456   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:29.845526   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.440285   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.939729   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.101616   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.598135   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:32.345268   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:34.844782   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.845440   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.940110   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:38.940964   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:36.600034   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:39.099711   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.100745   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:38.847223   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.344382   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:41.441047   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.939510   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.599982   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:46.101913   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:43.345029   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:45.345390   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:45.939787   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:47.940956   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:49.941949   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:48.598871   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:50.600154   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:47.346271   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:49.346661   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:51.844897   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:52.439646   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:54.440569   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:52.604096   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:55.103841   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:54.345832   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:56.845398   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:56.440640   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:58.939537   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:57.598505   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:00.098797   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:57:58.848087   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:01.346566   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:00.940434   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:03.439927   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:02.602188   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:05.100284   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:03.848841   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:06.346912   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:05.441676   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:07.942369   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:07.599099   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:09.601188   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:08.848926   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:11.346458   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:10.439620   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:12.440274   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:14.939694   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:12.098918   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:14.099419   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:13.844947   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:15.845203   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:16.940812   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:18.941307   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:16.599322   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:19.098815   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:21.100160   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:17.845975   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:20.347071   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:21.439802   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:23.441183   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:23.598459   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:26.098717   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:22.844674   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:24.845210   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:26.848564   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:25.939783   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:28.439490   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:28.099236   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:30.599130   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:29.344306   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:31.345070   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:30.439832   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:32.440229   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:34.441525   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:32.600143   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:35.100068   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:33.345938   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:35.845421   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:36.939642   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:38.941263   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:37.599587   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:40.099121   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:37.845529   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:40.345830   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:41.441175   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:43.941076   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:42.099418   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:44.101452   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:42.844426   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:44.846831   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:45.941732   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:48.440398   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:46.599328   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:48.600055   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:51.099949   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:47.347094   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:49.846223   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:50.940172   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:52.940229   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:54.941034   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:53.100619   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:55.599681   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:52.347726   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:54.845461   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:56.846142   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:56.941957   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.439408   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:57.600406   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.600450   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:58:59.344802   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:01.345852   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:01.939259   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:03.940182   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:02.101218   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:04.600651   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:03.845810   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:05.846170   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:05.940757   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:08.439635   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:07.100571   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:09.100718   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:08.344894   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:10.346744   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:10.440413   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:12.440882   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:14.940151   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:11.601260   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:13.603589   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:16.112928   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:12.848135   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:15.346591   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:17.440326   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:19.440421   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:18.598791   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:20.600589   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:17.845413   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:19.849057   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:21.941414   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:24.441214   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:23.100854   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:25.599374   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:22.346925   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:24.845239   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:26.941311   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:28.948332   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:28.100928   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:30.600465   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:27.345835   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:29.846655   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:31.848193   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:31.440572   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:33.939354   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:33.100068   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:35.601159   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:34.345252   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:36.346479   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:35.939843   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:37.941381   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:38.100393   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.102157   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:38.844435   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.845328   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:40.438849   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:42.441256   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:44.442877   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:42.601119   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:45.101132   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:43.345149   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:45.345522   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:46.940287   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:48.941589   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:47.101717   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:49.598367   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:47.846030   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:49.846247   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:51.438745   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:53.441587   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:51.599309   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:54.105369   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:56.110085   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:52.347026   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:54.845971   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:55.939702   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:57.940731   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:58.598821   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:00.599435   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:57.345043   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 20:59:59.346796   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:01.347030   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:00.439467   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:02.443994   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:04.941721   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:02.599994   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:05.098379   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:03.845802   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:05.846016   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:07.439561   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:09.440326   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:07.099339   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:09.599746   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:08.345432   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:10.347888   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:11.940331   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:13.940496   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:12.100751   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:14.597860   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:12.349653   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:14.846452   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:16.440554   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:18.441219   59107 pod_ready.go:102] pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:19.434076   59107 pod_ready.go:81] duration metric: took 4m0.000896796s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" ...
	E0708 21:00:19.434112   59107 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-h4btg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0708 21:00:19.434131   59107 pod_ready.go:38] duration metric: took 4m10.050938227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:00:19.434157   59107 kubeadm.go:591] duration metric: took 4m18.183643708s to restartPrimaryControlPlane
	W0708 21:00:19.434219   59107 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 21:00:19.434258   59107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0708 21:00:16.598896   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:18.598974   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:20.599027   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:17.345157   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:19.345498   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:21.346939   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:22.599140   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:24.600455   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:23.347325   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:25.846384   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:27.104536   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:29.598836   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:27.847635   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:30.345065   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:31.600246   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:34.099964   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:32.348256   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:34.846942   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:36.598075   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:38.599175   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:40.599720   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:37.345319   59655 pod_ready.go:102] pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:38.339580   59655 pod_ready.go:81] duration metric: took 4m0.000925316s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" ...
	E0708 21:00:38.339615   59655 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-h2dzd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0708 21:00:38.339635   59655 pod_ready.go:38] duration metric: took 4m7.551446129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:00:38.339667   59655 kubeadm.go:591] duration metric: took 4m17.566917749s to restartPrimaryControlPlane
	W0708 21:00:38.339731   59655 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 21:00:38.339763   59655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
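For reference, the repeated pod_ready.go:102 entries above come from a poll-until-Ready loop: the pod's Ready condition is re-checked on a short interval and the wait is abandoned once the 4m0s extra-wait budget runs out (the WaitExtra timeout logged at 21:00:38). A minimal Go sketch of that pattern, shelling out to kubectl instead of using minikube's actual client-go code; the pod name, namespace, and timings are taken from the log, and the sketch only illustrates the retry-with-deadline shape of the loop:

    // waitpodready.go: poll a pod's Ready condition until it is "True" or a deadline passes,
    // mirroring the retry cadence visible in the pod_ready.go log lines above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitPodReady(ns, name string, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		// Equivalent of:
    		//   kubectl -n <ns> get pod <name> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
    			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting %s for pod %q in %q to be Ready", timeout, name, ns)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	if err := waitPodReady("kube-system", "metrics-server-569cc877fc-4kpfm", 4*time.Minute, 2*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
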
	I0708 21:00:43.101768   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:45.102321   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:47.599770   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:50.100703   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:51.419295   59107 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.985013246s)
	I0708 21:00:51.419373   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:00:51.438876   59107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:00:51.451558   59107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:00:51.463932   59107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:00:51.463959   59107 kubeadm.go:156] found existing configuration files:
	
	I0708 21:00:51.464013   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 21:00:51.476729   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:00:51.476791   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:00:51.488357   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 21:00:51.499650   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:00:51.499720   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:00:51.510559   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 21:00:51.522747   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:00:51.522821   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:00:51.534156   59107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 21:00:51.545057   59107 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:00:51.545123   59107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
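For reference, the grep/rm sequence above is the stale-kubeconfig cleanup step: each of the four kubeconfig files is checked for the expected control-plane endpoint and removed when the endpoint (or the file itself) is missing, so that the following kubeadm init starts from clean configuration. A minimal Go sketch of that sequence, using the same file paths and endpoint as in the log; it is an illustration of the logged commands, not minikube's kubeadm.go code:

    // cleanstale.go: remove kubeconfig files that do not reference the expected control-plane
    // endpoint, mirroring the grep-then-rm commands in the kubeadm.go:162 log lines above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func cleanStaleConfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is absent (or the file is missing),
    		// in which case the file is treated as stale and removed.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:8443")
    }
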
	I0708 21:00:51.556712   59107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:00:51.766960   59107 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 21:00:52.599619   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:00:55.102565   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:01.185862   59107 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 21:01:01.185936   59107 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:01:01.186061   59107 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:01:01.186246   59107 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:01:01.186375   59107 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 21:01:01.186477   59107 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:01:01.188387   59107 out.go:204]   - Generating certificates and keys ...
	I0708 21:01:01.188489   59107 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:01:01.188575   59107 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:01:01.188655   59107 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 21:01:01.188754   59107 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 21:01:01.188856   59107 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 21:01:01.188937   59107 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 21:01:01.189015   59107 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 21:01:01.189107   59107 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 21:01:01.189216   59107 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 21:01:01.189326   59107 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 21:01:01.189381   59107 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 21:01:01.189445   59107 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:01:01.189504   59107 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:01:01.189571   59107 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 21:01:01.189636   59107 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:01:01.189732   59107 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:01:01.189822   59107 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:01:01.189939   59107 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:01:01.190019   59107 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:01:01.192426   59107 out.go:204]   - Booting up control plane ...
	I0708 21:01:01.192527   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:01:01.192598   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:01:01.192674   59107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:01:01.192795   59107 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:01:01.192892   59107 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:01:01.192949   59107 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:01:01.193078   59107 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 21:01:01.193150   59107 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 21:01:01.193204   59107 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001227366s
	I0708 21:01:01.193274   59107 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 21:01:01.193329   59107 kubeadm.go:309] [api-check] The API server is healthy after 5.506719576s
	I0708 21:01:01.193428   59107 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 21:01:01.193574   59107 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 21:01:01.193655   59107 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 21:01:01.193854   59107 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-239931 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 21:01:01.193936   59107 kubeadm.go:309] [bootstrap-token] Using token: uu1yg0.6mx8u39sjlxfysca
	I0708 21:01:01.196508   59107 out.go:204]   - Configuring RBAC rules ...
	I0708 21:01:01.196638   59107 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 21:01:01.196748   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 21:01:01.196867   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 21:01:01.196978   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 21:01:01.197141   59107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 21:01:01.197217   59107 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 21:01:01.197316   59107 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 21:01:01.197355   59107 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 21:01:01.197397   59107 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 21:01:01.197403   59107 kubeadm.go:309] 
	I0708 21:01:01.197451   59107 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 21:01:01.197457   59107 kubeadm.go:309] 
	I0708 21:01:01.197542   59107 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 21:01:01.197555   59107 kubeadm.go:309] 
	I0708 21:01:01.197597   59107 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 21:01:01.197673   59107 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 21:01:01.197748   59107 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 21:01:01.197761   59107 kubeadm.go:309] 
	I0708 21:01:01.197850   59107 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 21:01:01.197860   59107 kubeadm.go:309] 
	I0708 21:01:01.197903   59107 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 21:01:01.197912   59107 kubeadm.go:309] 
	I0708 21:01:01.197971   59107 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 21:01:01.198059   59107 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 21:01:01.198155   59107 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 21:01:01.198165   59107 kubeadm.go:309] 
	I0708 21:01:01.198279   59107 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 21:01:01.198389   59107 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 21:01:01.198400   59107 kubeadm.go:309] 
	I0708 21:01:01.198515   59107 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token uu1yg0.6mx8u39sjlxfysca \
	I0708 21:01:01.198663   59107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 21:01:01.198697   59107 kubeadm.go:309] 	--control-plane 
	I0708 21:01:01.198706   59107 kubeadm.go:309] 
	I0708 21:01:01.198821   59107 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 21:01:01.198830   59107 kubeadm.go:309] 
	I0708 21:01:01.198942   59107 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token uu1yg0.6mx8u39sjlxfysca \
	I0708 21:01:01.199078   59107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 21:01:01.199095   59107 cni.go:84] Creating CNI manager for ""
	I0708 21:01:01.199104   59107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:01:01.201409   59107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 21:00:57.600428   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:00.101501   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:01.202540   59107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 21:01:01.214691   59107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 21:01:01.238039   59107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 21:01:01.238180   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:01.238204   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-239931 minikube.k8s.io/updated_at=2024_07_08T21_01_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=embed-certs-239931 minikube.k8s.io/primary=true
	I0708 21:01:01.255228   59107 ops.go:34] apiserver oom_adj: -16
	I0708 21:01:01.441736   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:01.942570   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.442775   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.941941   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:03.441910   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:03.942762   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:04.442791   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:04.942122   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:02.600102   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:04.601357   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:05.442031   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:05.942414   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:06.442353   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:06.942075   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:07.442007   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:07.941952   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:08.442578   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:08.942110   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:09.442438   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:09.942436   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:10.666697   59655 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.326909913s)
	I0708 21:01:10.666766   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:10.684044   59655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:01:10.695291   59655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:01:10.705771   59655 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:01:10.705790   59655 kubeadm.go:156] found existing configuration files:
	
	I0708 21:01:10.705829   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0708 21:01:10.717858   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:01:10.717911   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:01:10.728721   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0708 21:01:10.738917   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:01:10.738985   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:01:10.749795   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0708 21:01:10.760976   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:01:10.761036   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:01:10.771625   59655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0708 21:01:10.781677   59655 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:01:10.781738   59655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 21:01:10.791622   59655 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:01:10.855152   59655 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 21:01:10.855246   59655 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:01:11.027005   59655 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:01:11.027132   59655 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:01:11.027245   59655 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 21:01:11.262898   59655 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:01:07.098267   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:09.099083   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:11.099398   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:11.264777   59655 out.go:204]   - Generating certificates and keys ...
	I0708 21:01:11.264897   59655 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:01:11.265011   59655 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:01:11.265143   59655 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 21:01:11.265245   59655 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 21:01:11.265331   59655 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 21:01:11.265412   59655 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 21:01:11.265516   59655 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 21:01:11.265601   59655 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 21:01:11.265692   59655 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 21:01:11.265806   59655 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 21:01:11.265883   59655 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 21:01:11.265979   59655 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:01:11.307094   59655 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:01:11.410219   59655 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 21:01:11.840751   59655 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:01:12.163906   59655 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:01:12.260797   59655 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:01:12.261513   59655 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:01:12.264128   59655 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 21:01:12.266095   59655 out.go:204]   - Booting up control plane ...
	I0708 21:01:12.266212   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 21:01:12.266301   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 21:01:12.267540   59655 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 21:01:12.290823   59655 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 21:01:12.291578   59655 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 21:01:12.291693   59655 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 21:01:10.442308   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:10.942270   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:11.442233   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:11.942533   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:12.442040   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:12.942629   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:13.441853   59107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:13.565655   59107 kubeadm.go:1107] duration metric: took 12.327535547s to wait for elevateKubeSystemPrivileges
	W0708 21:01:13.565704   59107 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 21:01:13.565714   59107 kubeadm.go:393] duration metric: took 5m12.375759038s to StartCluster
	I0708 21:01:13.565736   59107 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:13.565845   59107 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:01:13.568610   59107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:13.568940   59107 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:01:13.568980   59107 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 21:01:13.569061   59107 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-239931"
	I0708 21:01:13.569098   59107 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-239931"
	W0708 21:01:13.569113   59107 addons.go:243] addon storage-provisioner should already be in state true
	I0708 21:01:13.569136   59107 addons.go:69] Setting metrics-server=true in profile "embed-certs-239931"
	I0708 21:01:13.569098   59107 addons.go:69] Setting default-storageclass=true in profile "embed-certs-239931"
	I0708 21:01:13.569169   59107 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-239931"
	I0708 21:01:13.569178   59107 config.go:182] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:01:13.569149   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.569185   59107 addons.go:234] Setting addon metrics-server=true in "embed-certs-239931"
	W0708 21:01:13.569244   59107 addons.go:243] addon metrics-server should already be in state true
	I0708 21:01:13.569274   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.569617   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569639   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569648   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.569671   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.569673   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.569698   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.570670   59107 out.go:177] * Verifying Kubernetes components...
	I0708 21:01:13.572338   59107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:01:13.590692   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40615
	I0708 21:01:13.590708   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0708 21:01:13.590701   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0708 21:01:13.591271   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591375   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591622   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.591792   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.591806   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.591888   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.591909   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.592348   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.592368   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.592387   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.592422   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.592655   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.593065   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.593092   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.593568   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.594139   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.594196   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.596834   59107 addons.go:234] Setting addon default-storageclass=true in "embed-certs-239931"
	W0708 21:01:13.596857   59107 addons.go:243] addon default-storageclass should already be in state true
	I0708 21:01:13.596892   59107 host.go:66] Checking if "embed-certs-239931" exists ...
	I0708 21:01:13.597258   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.597278   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.615398   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0708 21:01:13.616090   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.617374   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.617395   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.617542   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37809
	I0708 21:01:13.618025   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.618066   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.618450   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.618538   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.618563   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.618953   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.619151   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.621015   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.622114   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43107
	I0708 21:01:13.622533   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.623046   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.623071   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.623346   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.623757   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.624750   59107 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 21:01:13.625744   59107 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:01:13.626604   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 21:01:13.626626   59107 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 21:01:13.626650   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.627717   59107 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:13.627737   59107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 21:01:13.627756   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.628207   59107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:13.628245   59107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:13.631548   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.633692   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.633737   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.634732   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.634960   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.635186   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.635262   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.635282   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.635415   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.635581   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.635946   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.636122   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.636282   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.636468   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.650948   59107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34883
	I0708 21:01:13.651543   59107 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:13.652143   59107 main.go:141] libmachine: Using API Version  1
	I0708 21:01:13.652165   59107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:13.652659   59107 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:13.652835   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetState
	I0708 21:01:13.654717   59107 main.go:141] libmachine: (embed-certs-239931) Calling .DriverName
	I0708 21:01:13.654971   59107 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:13.654988   59107 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 21:01:13.655006   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHHostname
	I0708 21:01:13.658670   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.659361   59107 main.go:141] libmachine: (embed-certs-239931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:d9:ac", ip: ""} in network mk-embed-certs-239931: {Iface:virbr3 ExpiryTime:2024-07-08 21:55:44 +0000 UTC Type:0 Mac:52:54:00:b3:d9:ac Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:embed-certs-239931 Clientid:01:52:54:00:b3:d9:ac}
	I0708 21:01:13.659475   59107 main.go:141] libmachine: (embed-certs-239931) DBG | domain embed-certs-239931 has defined IP address 192.168.61.126 and MAC address 52:54:00:b3:d9:ac in network mk-embed-certs-239931
	I0708 21:01:13.659800   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHPort
	I0708 21:01:13.660109   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHKeyPath
	I0708 21:01:13.660275   59107 main.go:141] libmachine: (embed-certs-239931) Calling .GetSSHUsername
	I0708 21:01:13.660406   59107 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/embed-certs-239931/id_rsa Username:docker}
	I0708 21:01:13.813860   59107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:01:13.832841   59107 node_ready.go:35] waiting up to 6m0s for node "embed-certs-239931" to be "Ready" ...
	I0708 21:01:13.842398   59107 node_ready.go:49] node "embed-certs-239931" has status "Ready":"True"
	I0708 21:01:13.842420   59107 node_ready.go:38] duration metric: took 9.540746ms for node "embed-certs-239931" to be "Ready" ...
	I0708 21:01:13.842430   59107 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:13.853426   59107 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.861421   59107 pod_ready.go:92] pod "etcd-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.861451   59107 pod_ready.go:81] duration metric: took 7.991733ms for pod "etcd-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.861466   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.873198   59107 pod_ready.go:92] pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.873228   59107 pod_ready.go:81] duration metric: took 11.754017ms for pod "kube-apiserver-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.873243   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.882509   59107 pod_ready.go:92] pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.882560   59107 pod_ready.go:81] duration metric: took 9.307056ms for pod "kube-controller-manager-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.882574   59107 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.890814   59107 pod_ready.go:92] pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:13.890843   59107 pod_ready.go:81] duration metric: took 8.26049ms for pod "kube-scheduler-embed-certs-239931" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:13.890854   59107 pod_ready.go:38] duration metric: took 48.414688ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:13.890872   59107 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:13.890934   59107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:13.913170   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 21:01:13.913199   59107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 21:01:13.936334   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:13.942642   59107 api_server.go:72] duration metric: took 373.624334ms to wait for apiserver process to appear ...
	I0708 21:01:13.942673   59107 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:13.942696   59107 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0708 21:01:13.947241   59107 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0708 21:01:13.948330   59107 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:13.948354   59107 api_server.go:131] duration metric: took 5.673644ms to wait for apiserver health ...
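For reference, the healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, considered successful once it returns 200 with body "ok" (api_server.go:253/279). A minimal Go sketch of such a probe; certificate verification is skipped here only to keep the example short, whereas the real check trusts the cluster CA:

    // healthz.go: probe the apiserver /healthz endpoint and report whether it answers 200 "ok".
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    	return nil
    }

    func main() {
    	_ = checkHealthz("https://192.168.61.126:8443/healthz")
    }
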
	I0708 21:01:13.948364   59107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:13.968333   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:13.999888   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 21:01:13.999920   59107 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 21:01:14.072446   59107 system_pods.go:59] 5 kube-system pods found
	I0708 21:01:14.072553   59107 system_pods.go:61] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.072575   59107 system_pods.go:61] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.072594   59107 system_pods.go:61] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.072608   59107 system_pods.go:61] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending
	I0708 21:01:14.072621   59107 system_pods.go:61] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.072637   59107 system_pods.go:74] duration metric: took 124.266452ms to wait for pod list to return data ...
	I0708 21:01:14.072663   59107 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:14.111310   59107 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:14.111337   59107 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 21:01:14.196596   59107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:14.248043   59107 default_sa.go:45] found service account: "default"
	I0708 21:01:14.248075   59107 default_sa.go:55] duration metric: took 175.396297ms for default service account to be created ...
	I0708 21:01:14.248086   59107 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:14.381129   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.381166   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.381490   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.381507   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.381517   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.381525   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.383203   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:14.383213   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.383229   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.430533   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:14.430558   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:14.430835   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:14.431498   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:14.431558   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:14.440088   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.440129   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.440140   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.440148   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.440156   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.440162   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.440171   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.440176   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.440199   59107 retry.go:31] will retry after 211.74015ms: missing components: kube-dns, kube-proxy
	I0708 21:01:14.660845   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.660901   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.660916   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.660928   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.660938   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.660946   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.660990   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.661002   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.661036   59107 retry.go:31] will retry after 318.627165ms: missing components: kube-dns, kube-proxy
	I0708 21:01:14.988296   59107 system_pods.go:86] 7 kube-system pods found
	I0708 21:01:14.988336   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.988348   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:14.988359   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:14.988369   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:14.988376   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:14.988388   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:14.988398   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:14.988425   59107 retry.go:31] will retry after 333.622066ms: missing components: kube-dns, kube-proxy
	I0708 21:01:15.024853   59107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.056470802s)
	I0708 21:01:15.024902   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.024914   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.025237   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.025264   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.025266   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.025279   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.025288   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.025550   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.025566   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.348381   59107 system_pods.go:86] 8 kube-system pods found
	I0708 21:01:15.348419   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.348430   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.348440   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:15.348448   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:15.348455   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:15.348464   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 21:01:15.348473   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:15.348483   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:15.348502   59107 retry.go:31] will retry after 415.910372ms: missing components: kube-dns, kube-proxy
	I0708 21:01:15.736384   59107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.539741133s)
	I0708 21:01:15.736440   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.736456   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.736743   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.736782   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.736763   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.736803   59107 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:15.736851   59107 main.go:141] libmachine: (embed-certs-239931) Calling .Close
	I0708 21:01:15.737097   59107 main.go:141] libmachine: (embed-certs-239931) DBG | Closing plugin on server side
	I0708 21:01:15.737135   59107 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:15.737148   59107 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:15.737157   59107 addons.go:475] Verifying addon metrics-server=true in "embed-certs-239931"
	I0708 21:01:15.739025   59107 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0708 21:01:13.102963   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:15.601580   58678 pod_ready.go:102] pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:16.101049   58678 pod_ready.go:81] duration metric: took 4m0.00868677s for pod "metrics-server-569cc877fc-4kpfm" in "kube-system" namespace to be "Ready" ...
	E0708 21:01:16.101081   58678 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0708 21:01:16.101094   58678 pod_ready.go:38] duration metric: took 4m5.070908601s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:16.101112   58678 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:16.101147   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:16.101210   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:16.175601   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:16.175631   58678 cri.go:89] found id: ""
	I0708 21:01:16.175642   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:16.175703   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.182938   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:16.183013   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:16.261385   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:16.261411   58678 cri.go:89] found id: ""
	I0708 21:01:16.261423   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:16.261483   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.266231   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:16.266310   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:15.741167   59107 addons.go:510] duration metric: took 2.172185316s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
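For reference, the addon summary reported above can be cross-checked from the host with the minikube CLI; a minimal sketch, assuming the "embed-certs-239931" profile used throughout this run:

    # list addon status for the profile (enabled: default-storageclass, storage-provisioner, metrics-server)
    minikube addons list -p embed-certs-239931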
	I0708 21:01:15.890659   59107 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:15.890702   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.890713   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:15.890723   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:15.890731   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:15.890738   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:15.890745   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Running
	I0708 21:01:15.890751   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:15.890759   59107 system_pods.go:89] "metrics-server-569cc877fc-f2dkn" [1d3c3e8e-356d-40b9-8add-35eec096e9f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:15.890772   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:15.890790   59107 retry.go:31] will retry after 557.749423ms: missing components: kube-dns
	I0708 21:01:16.457046   59107 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:16.457093   59107 system_pods.go:89] "coredns-7db6d8ff4d-l9xmm" [92723e6e-5bce-43ed-abdb-63120212456f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:16.457105   59107 system_pods.go:89] "coredns-7db6d8ff4d-qbqkx" [39e42c3f-d8a8-4907-b08d-ada6919b55c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 21:01:16.457114   59107 system_pods.go:89] "etcd-embed-certs-239931" [53fac5fc-faba-4d6a-ba2f-c5aabfb3f5bb] Running
	I0708 21:01:16.457124   59107 system_pods.go:89] "kube-apiserver-embed-certs-239931" [f0927911-79bf-40e8-848b-0d4be4443dc6] Running
	I0708 21:01:16.457131   59107 system_pods.go:89] "kube-controller-manager-embed-certs-239931" [f5b90b89-6d92-42e1-addf-82b817194ca2] Running
	I0708 21:01:16.457137   59107 system_pods.go:89] "kube-proxy-vkvf6" [d5f5061c-fd24-42eb-97b4-e5ec5f57c325] Running
	I0708 21:01:16.457143   59107 system_pods.go:89] "kube-scheduler-embed-certs-239931" [cc0d84b5-1b54-4a6d-8bb9-c5ffbfd6607f] Running
	I0708 21:01:16.457153   59107 system_pods.go:89] "metrics-server-569cc877fc-f2dkn" [1d3c3e8e-356d-40b9-8add-35eec096e9f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:16.457173   59107 system_pods.go:89] "storage-provisioner" [abe38aa1-fac7-4517-9b33-76f04d2a2f4e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 21:01:16.457183   59107 system_pods.go:126] duration metric: took 2.209089992s to wait for k8s-apps to be running ...
	I0708 21:01:16.457196   59107 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:16.457251   59107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:16.474652   59107 system_svc.go:56] duration metric: took 17.443712ms WaitForService to wait for kubelet
	I0708 21:01:16.474691   59107 kubeadm.go:576] duration metric: took 2.905677883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:16.474715   59107 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:16.478431   59107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:16.478456   59107 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:16.478480   59107 node_conditions.go:105] duration metric: took 3.758433ms to run NodePressure ...
	I0708 21:01:16.478502   59107 start.go:240] waiting for startup goroutines ...
	I0708 21:01:16.478515   59107 start.go:245] waiting for cluster config update ...
	I0708 21:01:16.478529   59107 start.go:254] writing updated cluster config ...
	I0708 21:01:16.478860   59107 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:16.536046   59107 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:16.538131   59107 out.go:177] * Done! kubectl is now configured to use "embed-certs-239931" cluster and "default" namespace by default
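For reference, the pod-readiness polling above corresponds to an ordinary kubectl query; a minimal sketch, assuming the "embed-certs-239931" context that the log reports as configured:

    # list the kube-system pods the waiter was polling (coredns, kube-proxy, metrics-server, storage-provisioner)
    kubectl --context embed-certs-239931 -n kube-system get pods -o wide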
	I0708 21:01:12.440116   59655 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 21:01:12.440237   59655 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 21:01:13.441567   59655 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001312349s
	I0708 21:01:13.441690   59655 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 21:01:18.943345   59655 kubeadm.go:309] [api-check] The API server is healthy after 5.501634999s
	I0708 21:01:18.963728   59655 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 21:01:18.980036   59655 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 21:01:19.028362   59655 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 21:01:19.028635   59655 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-071971 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 21:01:19.051700   59655 kubeadm.go:309] [bootstrap-token] Using token: guoi3f.tsy4dvdlokyfqa2b
	I0708 21:01:19.053224   59655 out.go:204]   - Configuring RBAC rules ...
	I0708 21:01:19.053323   59655 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 21:01:19.063058   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 21:01:19.077711   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 21:01:19.090415   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 21:01:19.095539   59655 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 21:01:19.101465   59655 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 21:01:19.351634   59655 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 21:01:19.809053   59655 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 21:01:20.359069   59655 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 21:01:20.359125   59655 kubeadm.go:309] 
	I0708 21:01:20.359193   59655 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 21:01:20.359227   59655 kubeadm.go:309] 
	I0708 21:01:20.359368   59655 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 21:01:20.359379   59655 kubeadm.go:309] 
	I0708 21:01:20.359439   59655 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 21:01:20.359553   59655 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 21:01:20.359613   59655 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 21:01:20.359624   59655 kubeadm.go:309] 
	I0708 21:01:20.359686   59655 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 21:01:20.359694   59655 kubeadm.go:309] 
	I0708 21:01:20.359733   59655 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 21:01:20.359740   59655 kubeadm.go:309] 
	I0708 21:01:20.359787   59655 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 21:01:20.359899   59655 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 21:01:20.359994   59655 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 21:01:20.360003   59655 kubeadm.go:309] 
	I0708 21:01:20.360096   59655 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 21:01:20.360194   59655 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 21:01:20.360202   59655 kubeadm.go:309] 
	I0708 21:01:20.360311   59655 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token guoi3f.tsy4dvdlokyfqa2b \
	I0708 21:01:20.360468   59655 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 \
	I0708 21:01:20.360507   59655 kubeadm.go:309] 	--control-plane 
	I0708 21:01:20.360516   59655 kubeadm.go:309] 
	I0708 21:01:20.360628   59655 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 21:01:20.360639   59655 kubeadm.go:309] 
	I0708 21:01:20.360765   59655 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token guoi3f.tsy4dvdlokyfqa2b \
	I0708 21:01:20.360891   59655 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:146e95470dc1e86206b987567b5521d834d6bda070b68c4e1b8fe6916f7b79c0 
	I0708 21:01:20.361857   59655 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
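The [WARNING Service-Kubelet] message above carries its own remedy; on the guest node it amounts to the single command quoted in the warning:

    # make the kubelet unit start on boot, as kubeadm suggests
    sudo systemctl enable kubelet.service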
	I0708 21:01:20.361894   59655 cni.go:84] Creating CNI manager for ""
	I0708 21:01:20.361910   59655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:01:20.363579   59655 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 21:01:16.309299   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:16.309328   58678 cri.go:89] found id: ""
	I0708 21:01:16.309337   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:16.309403   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.314236   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:16.314320   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:16.371891   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:16.371919   58678 cri.go:89] found id: ""
	I0708 21:01:16.371937   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:16.372008   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.380409   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:16.380480   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:16.428411   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:16.428441   58678 cri.go:89] found id: ""
	I0708 21:01:16.428452   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:16.428514   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.433310   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:16.433390   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:16.474785   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:16.474807   58678 cri.go:89] found id: ""
	I0708 21:01:16.474816   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:16.474882   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.480849   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:16.480933   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:16.529115   58678 cri.go:89] found id: ""
	I0708 21:01:16.529136   58678 logs.go:276] 0 containers: []
	W0708 21:01:16.529146   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:16.529153   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:16.529222   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:16.576499   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:16.576519   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:16.576527   58678 cri.go:89] found id: ""
	I0708 21:01:16.576536   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:16.576584   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.581261   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:16.587704   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:16.587733   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:16.651329   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:16.651385   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:16.706341   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:16.706380   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:17.302518   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:17.302570   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:17.373619   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:17.373651   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:17.414687   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:17.414722   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:17.470462   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:17.470499   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:17.487151   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:17.487189   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:17.625611   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:17.625655   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:17.673291   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:17.673325   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:17.712222   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:17.712253   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:17.752635   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:17.752665   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:17.794056   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:17.794085   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
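The log-gathering pass above is built from plain crictl calls and can be repeated by hand on the node; a sketch that reuses the same commands the test driver runs:

    # resolve the kube-apiserver container ID, then dump its most recent log lines
    sudo crictl logs --tail 400 "$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)"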
	I0708 21:01:20.341805   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:20.362405   58678 api_server.go:72] duration metric: took 4m15.074761342s to wait for apiserver process to appear ...
	I0708 21:01:20.362430   58678 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:20.362465   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:20.362523   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:20.409947   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:20.409974   58678 cri.go:89] found id: ""
	I0708 21:01:20.409983   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:20.410040   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.414415   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:20.414476   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:20.463162   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:20.463186   58678 cri.go:89] found id: ""
	I0708 21:01:20.463196   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:20.463263   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.468905   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:20.468986   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:20.514265   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:20.514291   58678 cri.go:89] found id: ""
	I0708 21:01:20.514299   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:20.514357   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.519003   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:20.519081   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:20.565097   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:20.565122   58678 cri.go:89] found id: ""
	I0708 21:01:20.565132   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:20.565190   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.569971   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:20.570048   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:20.614435   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:20.614459   58678 cri.go:89] found id: ""
	I0708 21:01:20.614469   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:20.614525   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.619745   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:20.619824   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:20.660213   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:20.660235   58678 cri.go:89] found id: ""
	I0708 21:01:20.660242   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:20.660292   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.664740   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:20.664822   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:20.710279   58678 cri.go:89] found id: ""
	I0708 21:01:20.710300   58678 logs.go:276] 0 containers: []
	W0708 21:01:20.710307   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:20.710312   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:20.710359   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:20.751880   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:20.751906   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:20.751910   58678 cri.go:89] found id: ""
	I0708 21:01:20.751917   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:20.752028   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.756530   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:20.760679   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:20.760705   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:20.800525   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:20.800556   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:20.845629   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:20.845666   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:20.364837   59655 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 21:01:20.376977   59655 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 21:01:20.400133   59655 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 21:01:20.400241   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:20.400291   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-071971 minikube.k8s.io/updated_at=2024_07_08T21_01_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=default-k8s-diff-port-071971 minikube.k8s.io/primary=true
	I0708 21:01:20.597429   59655 ops.go:34] apiserver oom_adj: -16
	I0708 21:01:20.597490   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.098582   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.597812   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:22.097790   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:21.356988   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:21.357025   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:21.416130   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:21.416160   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:21.431831   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:21.431865   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:21.479568   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:21.479597   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:21.527937   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:21.527970   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:21.569569   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:21.569605   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:21.691646   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:21.691678   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:21.737949   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:21.737975   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:21.789038   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:21.789069   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:21.831677   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:21.831703   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:01:24.380502   58678 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I0708 21:01:24.385139   58678 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I0708 21:01:24.386116   58678 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:24.386137   58678 api_server.go:131] duration metric: took 4.023699983s to wait for apiserver health ...
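The healthz probe above is an ordinary HTTPS GET and can be issued manually; a sketch against the endpoint shown in the log (-k skips TLS verification; on a default kubeadm cluster /healthz is typically readable without client credentials, though hardened clusters may require them):

    # expect the body "ok" with HTTP 200, matching the check in the log
    curl -k https://192.168.39.108:8443/healthz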
	I0708 21:01:24.386146   58678 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:24.386171   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 21:01:24.386225   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 21:01:24.423786   58678 cri.go:89] found id: "32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:24.423809   58678 cri.go:89] found id: ""
	I0708 21:01:24.423816   58678 logs.go:276] 1 containers: [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4]
	I0708 21:01:24.423869   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.428385   58678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 21:01:24.428447   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 21:01:24.467186   58678 cri.go:89] found id: "3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:24.467206   58678 cri.go:89] found id: ""
	I0708 21:01:24.467213   58678 logs.go:276] 1 containers: [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919]
	I0708 21:01:24.467269   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.472208   58678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 21:01:24.472273   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 21:01:24.511157   58678 cri.go:89] found id: "d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:24.511188   58678 cri.go:89] found id: ""
	I0708 21:01:24.511199   58678 logs.go:276] 1 containers: [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46]
	I0708 21:01:24.511266   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.516077   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 21:01:24.516144   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 21:01:24.556095   58678 cri.go:89] found id: "7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:24.556115   58678 cri.go:89] found id: ""
	I0708 21:01:24.556122   58678 logs.go:276] 1 containers: [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a]
	I0708 21:01:24.556171   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.560735   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 21:01:24.560795   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 21:01:24.602473   58678 cri.go:89] found id: "abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:24.602498   58678 cri.go:89] found id: ""
	I0708 21:01:24.602508   58678 logs.go:276] 1 containers: [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b]
	I0708 21:01:24.602562   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.608926   58678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 21:01:24.609003   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 21:01:24.653230   58678 cri.go:89] found id: "2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:24.653258   58678 cri.go:89] found id: ""
	I0708 21:01:24.653267   58678 logs.go:276] 1 containers: [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06]
	I0708 21:01:24.653327   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.657884   58678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0708 21:01:24.657954   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0708 21:01:24.700775   58678 cri.go:89] found id: ""
	I0708 21:01:24.700800   58678 logs.go:276] 0 containers: []
	W0708 21:01:24.700810   58678 logs.go:278] No container was found matching "kindnet"
	I0708 21:01:24.700817   58678 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 21:01:24.700876   58678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 21:01:24.738593   58678 cri.go:89] found id: "7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:24.738619   58678 cri.go:89] found id: "a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:24.738625   58678 cri.go:89] found id: ""
	I0708 21:01:24.738633   58678 logs.go:276] 2 containers: [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a]
	I0708 21:01:24.738689   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.743324   58678 ssh_runner.go:195] Run: which crictl
	I0708 21:01:24.747684   58678 logs.go:123] Gathering logs for kubelet ...
	I0708 21:01:24.747709   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 21:01:24.800431   58678 logs.go:123] Gathering logs for describe nodes ...
	I0708 21:01:24.800467   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 21:01:24.910702   58678 logs.go:123] Gathering logs for kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] ...
	I0708 21:01:24.910738   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06"
	I0708 21:01:24.967323   58678 logs.go:123] Gathering logs for storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] ...
	I0708 21:01:24.967355   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a"
	I0708 21:01:25.012335   58678 logs.go:123] Gathering logs for CRI-O ...
	I0708 21:01:25.012367   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 21:01:25.393024   58678 logs.go:123] Gathering logs for container status ...
	I0708 21:01:25.393064   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 21:01:25.449280   58678 logs.go:123] Gathering logs for storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] ...
	I0708 21:01:25.449315   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b"
	I0708 21:01:25.488676   58678 logs.go:123] Gathering logs for dmesg ...
	I0708 21:01:25.488703   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 21:01:25.503705   58678 logs.go:123] Gathering logs for kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] ...
	I0708 21:01:25.503734   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4"
	I0708 21:01:25.551111   58678 logs.go:123] Gathering logs for etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] ...
	I0708 21:01:25.551155   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919"
	I0708 21:01:25.598388   58678 logs.go:123] Gathering logs for coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] ...
	I0708 21:01:25.598425   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46"
	I0708 21:01:25.642052   58678 logs.go:123] Gathering logs for kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] ...
	I0708 21:01:25.642087   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a"
	I0708 21:01:25.680632   58678 logs.go:123] Gathering logs for kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] ...
	I0708 21:01:25.680665   58678 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b"
	I0708 21:01:22.597628   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:23.098128   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:23.597756   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:24.097555   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:24.598149   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:25.098149   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:25.598255   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:26.097514   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:26.598211   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:27.097610   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.229251   58678 system_pods.go:59] 8 kube-system pods found
	I0708 21:01:28.229286   58678 system_pods.go:61] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running
	I0708 21:01:28.229293   58678 system_pods.go:61] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running
	I0708 21:01:28.229298   58678 system_pods.go:61] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running
	I0708 21:01:28.229304   58678 system_pods.go:61] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running
	I0708 21:01:28.229308   58678 system_pods.go:61] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 21:01:28.229312   58678 system_pods.go:61] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running
	I0708 21:01:28.229321   58678 system_pods.go:61] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:28.229327   58678 system_pods.go:61] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 21:01:28.229337   58678 system_pods.go:74] duration metric: took 3.843183956s to wait for pod list to return data ...
	I0708 21:01:28.229347   58678 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:28.232297   58678 default_sa.go:45] found service account: "default"
	I0708 21:01:28.232323   58678 default_sa.go:55] duration metric: took 2.96709ms for default service account to be created ...
	I0708 21:01:28.232333   58678 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:28.240720   58678 system_pods.go:86] 8 kube-system pods found
	I0708 21:01:28.240750   58678 system_pods.go:89] "coredns-7db6d8ff4d-bb6cr" [5c1efedb-97f2-4bf0-a182-b8329b3bc6f1] Running
	I0708 21:01:28.240755   58678 system_pods.go:89] "etcd-no-preload-028021" [c048e725-a499-48f4-8de7-2e68b71887ac] Running
	I0708 21:01:28.240760   58678 system_pods.go:89] "kube-apiserver-no-preload-028021" [0375461d-0a2d-4657-8d87-2426d9c3f304] Running
	I0708 21:01:28.240765   58678 system_pods.go:89] "kube-controller-manager-no-preload-028021" [9b4183a1-709c-47d4-b267-977abaafd82c] Running
	I0708 21:01:28.240770   58678 system_pods.go:89] "kube-proxy-6p6l6" [dfa04234-ad5a-4a24-b6a5-152933bb12b9] Running
	I0708 21:01:28.240774   58678 system_pods.go:89] "kube-scheduler-no-preload-028021" [8df4b039-4751-46e8-a7c5-07c2c50b84d4] Running
	I0708 21:01:28.240781   58678 system_pods.go:89] "metrics-server-569cc877fc-4kpfm" [c37f4622-163f-48bf-9bb4-5a20b88187ad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:28.240787   58678 system_pods.go:89] "storage-provisioner" [aca0a23e-8d09-4541-b80b-87242bed8483] Running
	I0708 21:01:28.240794   58678 system_pods.go:126] duration metric: took 8.454141ms to wait for k8s-apps to be running ...
	I0708 21:01:28.240804   58678 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:28.240855   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:28.256600   58678 system_svc.go:56] duration metric: took 15.789082ms WaitForService to wait for kubelet
	I0708 21:01:28.256630   58678 kubeadm.go:576] duration metric: took 4m22.968988646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:28.256654   58678 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:28.260384   58678 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:28.260402   58678 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:28.260412   58678 node_conditions.go:105] duration metric: took 3.753004ms to run NodePressure ...
	I0708 21:01:28.260422   58678 start.go:240] waiting for startup goroutines ...
	I0708 21:01:28.260429   58678 start.go:245] waiting for cluster config update ...
	I0708 21:01:28.260438   58678 start.go:254] writing updated cluster config ...
	I0708 21:01:28.260686   58678 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:28.311517   58678 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:28.313560   58678 out.go:177] * Done! kubectl is now configured to use "no-preload-028021" cluster and "default" namespace by default
	I0708 21:01:27.598457   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.098475   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:28.598380   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:29.097496   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:29.598229   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:30.097844   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:30.598323   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:31.097781   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:31.598085   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:32.098438   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:32.598450   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.098414   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.597823   59655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 21:01:33.688717   59655 kubeadm.go:1107] duration metric: took 13.288534329s to wait for elevateKubeSystemPrivileges
	W0708 21:01:33.688756   59655 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 21:01:33.688765   59655 kubeadm.go:393] duration metric: took 5m12.976251287s to StartCluster
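The repeated "get sa default" calls above implement a wait for the default ServiceAccount to be provisioned; the same wait can be expressed as a small shell loop (a sketch, reusing the kubectl binary and kubeconfig path from the log):

    # poll until the "default" ServiceAccount exists in the default namespace
    until sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done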
	I0708 21:01:33.688782   59655 settings.go:142] acquiring lock: {Name:mka7933f9afb0721d6f23c45eb713774ed1c0fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:33.688874   59655 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:01:33.690446   59655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:01:33.690691   59655 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.163 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:01:33.690814   59655 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 21:01:33.690875   59655 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-071971"
	I0708 21:01:33.690893   59655 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:01:33.690907   59655 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-071971"
	I0708 21:01:33.690902   59655 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-071971"
	W0708 21:01:33.690915   59655 addons.go:243] addon storage-provisioner should already be in state true
	I0708 21:01:33.690914   59655 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-071971"
	I0708 21:01:33.690939   59655 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-071971"
	I0708 21:01:33.690945   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.690957   59655 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-071971"
	W0708 21:01:33.690968   59655 addons.go:243] addon metrics-server should already be in state true
	I0708 21:01:33.691002   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.691272   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691274   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691294   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.691299   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.691323   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.691361   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.692506   59655 out.go:177] * Verifying Kubernetes components...
	I0708 21:01:33.694134   59655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:01:33.708343   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0708 21:01:33.708681   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0708 21:01:33.708849   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.709011   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.709402   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.709421   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.709559   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.709578   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.709795   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.709864   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.710365   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.710411   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.710417   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.710445   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.710809   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
	I0708 21:01:33.711278   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.711858   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.711892   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.712294   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.712604   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.716565   59655 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-071971"
	W0708 21:01:33.716590   59655 addons.go:243] addon default-storageclass should already be in state true
	I0708 21:01:33.716620   59655 host.go:66] Checking if "default-k8s-diff-port-071971" exists ...
	I0708 21:01:33.716990   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.717041   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.728113   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0708 21:01:33.728257   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0708 21:01:33.728694   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.728742   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.729182   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.729211   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.729331   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.729353   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.729605   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.729663   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.729781   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.729846   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.731832   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.731878   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.734021   59655 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 21:01:33.734026   59655 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0708 21:01:33.736062   59655 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:33.736094   59655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 21:01:33.736122   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.736174   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 21:01:33.736192   59655 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0708 21:01:33.736222   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.736793   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0708 21:01:33.737419   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.739820   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.739837   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.740075   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740272   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.740463   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.740484   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740512   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.740818   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.740967   59655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:01:33.741060   59655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:01:33.741213   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.741225   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.741279   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.741309   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.741438   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.741596   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.741587   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.741730   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.741820   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.758223   59655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0708 21:01:33.758739   59655 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:01:33.759237   59655 main.go:141] libmachine: Using API Version  1
	I0708 21:01:33.759254   59655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:01:33.759633   59655 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:01:33.759909   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetState
	I0708 21:01:33.761455   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .DriverName
	I0708 21:01:33.761644   59655 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:33.761656   59655 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 21:01:33.761669   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHHostname
	I0708 21:01:33.764245   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.764541   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:a7:be", ip: ""} in network mk-default-k8s-diff-port-071971: {Iface:virbr1 ExpiryTime:2024-07-08 21:50:10 +0000 UTC Type:0 Mac:52:54:00:40:a7:be Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:default-k8s-diff-port-071971 Clientid:01:52:54:00:40:a7:be}
	I0708 21:01:33.764563   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | domain default-k8s-diff-port-071971 has defined IP address 192.168.72.163 and MAC address 52:54:00:40:a7:be in network mk-default-k8s-diff-port-071971
	I0708 21:01:33.764701   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHPort
	I0708 21:01:33.764872   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHKeyPath
	I0708 21:01:33.765022   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .GetSSHUsername
	I0708 21:01:33.765126   59655 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/default-k8s-diff-port-071971/id_rsa Username:docker}
	I0708 21:01:33.926862   59655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:01:33.980155   59655 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-071971" to be "Ready" ...
	I0708 21:01:33.993505   59655 node_ready.go:49] node "default-k8s-diff-port-071971" has status "Ready":"True"
	I0708 21:01:33.993526   59655 node_ready.go:38] duration metric: took 13.344616ms for node "default-k8s-diff-port-071971" to be "Ready" ...
	I0708 21:01:33.993534   59655 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:34.001402   59655 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:34.045900   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 21:01:34.058039   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 21:01:34.058059   59655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0708 21:01:34.102931   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 21:01:34.121513   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 21:01:34.121541   59655 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0708 21:01:34.190181   59655 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:34.190208   59655 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0708 21:01:34.232200   59655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 21:01:35.071867   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.071888   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.071977   59655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.026035336s)
	I0708 21:01:35.072026   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.072044   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.072157   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.072192   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.072205   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.072212   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.073887   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.073887   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.073917   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.073989   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.074003   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.074013   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.073907   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.074111   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.074438   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.074461   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.146813   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.146840   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.147181   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.147201   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.337952   59655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.105709862s)
	I0708 21:01:35.338010   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.338023   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.338415   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) DBG | Closing plugin on server side
	I0708 21:01:35.338447   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.338461   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.338471   59655 main.go:141] libmachine: Making call to close driver server
	I0708 21:01:35.338484   59655 main.go:141] libmachine: (default-k8s-diff-port-071971) Calling .Close
	I0708 21:01:35.338733   59655 main.go:141] libmachine: Successfully made call to close driver server
	I0708 21:01:35.338751   59655 main.go:141] libmachine: Making call to close connection to plugin binary
	I0708 21:01:35.338763   59655 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-071971"
	I0708 21:01:35.340678   59655 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0708 21:01:35.341902   59655 addons.go:510] duration metric: took 1.651084154s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0708 21:01:36.011439   59655 pod_ready.go:102] pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace has status "Ready":"False"
	I0708 21:01:37.008538   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.008567   59655 pod_ready.go:81] duration metric: took 3.0071384s for pod "coredns-7db6d8ff4d-8msvk" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.008582   59655 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.013291   59655 pod_ready.go:92] pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.013313   59655 pod_ready.go:81] duration metric: took 4.723566ms for pod "coredns-7db6d8ff4d-hq7zj" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.013326   59655 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.017974   59655 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.017997   59655 pod_ready.go:81] duration metric: took 4.66297ms for pod "etcd-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.018009   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.022526   59655 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.022550   59655 pod_ready.go:81] duration metric: took 4.533312ms for pod "kube-apiserver-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.022563   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.027009   59655 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.027032   59655 pod_ready.go:81] duration metric: took 4.462202ms for pod "kube-controller-manager-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.027042   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l2mdd" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.406030   59655 pod_ready.go:92] pod "kube-proxy-l2mdd" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.406055   59655 pod_ready.go:81] duration metric: took 379.00677ms for pod "kube-proxy-l2mdd" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.406064   59655 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.806120   59655 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace has status "Ready":"True"
	I0708 21:01:37.806141   59655 pod_ready.go:81] duration metric: took 400.070718ms for pod "kube-scheduler-default-k8s-diff-port-071971" in "kube-system" namespace to be "Ready" ...
	I0708 21:01:37.806151   59655 pod_ready.go:38] duration metric: took 3.812606006s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 21:01:37.806165   59655 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:01:37.806214   59655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:01:37.822846   59655 api_server.go:72] duration metric: took 4.132126389s to wait for apiserver process to appear ...
	I0708 21:01:37.822872   59655 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:01:37.822889   59655 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8444/healthz ...
	I0708 21:01:37.827017   59655 api_server.go:279] https://192.168.72.163:8444/healthz returned 200:
	ok
	I0708 21:01:37.827906   59655 api_server.go:141] control plane version: v1.30.2
	I0708 21:01:37.827930   59655 api_server.go:131] duration metric: took 5.051704ms to wait for apiserver health ...
	I0708 21:01:37.827938   59655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 21:01:38.010909   59655 system_pods.go:59] 9 kube-system pods found
	I0708 21:01:38.010937   59655 system_pods.go:61] "coredns-7db6d8ff4d-8msvk" [38c1e0eb-5eb4-4acb-a5ae-c72871884e3d] Running
	I0708 21:01:38.010942   59655 system_pods.go:61] "coredns-7db6d8ff4d-hq7zj" [ddb0f99d-a91d-4bb7-96e7-695b6101a601] Running
	I0708 21:01:38.010946   59655 system_pods.go:61] "etcd-default-k8s-diff-port-071971" [e3399214-404c-423e-9648-b4d920028a92] Running
	I0708 21:01:38.010949   59655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071971" [7b726b49-c243-4126-b6d2-fc12abc9a042] Running
	I0708 21:01:38.010953   59655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071971" [6a731125-daa4-4da1-b9e0-1206da592fde] Running
	I0708 21:01:38.010956   59655 system_pods.go:61] "kube-proxy-l2mdd" [b1d70ae2-ed86-49bd-8910-a12c5cd8091a] Running
	I0708 21:01:38.010959   59655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071971" [dc238033-038e-49ec-ba48-392b0ec2f7bd] Running
	I0708 21:01:38.010965   59655 system_pods.go:61] "metrics-server-569cc877fc-k8vhl" [09f957f3-d76f-4f21-b9a6-e5b249d07e1e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:38.010970   59655 system_pods.go:61] "storage-provisioner" [805a8fdb-ed9e-4f80-a2c9-7d8a0155b228] Running
	I0708 21:01:38.010979   59655 system_pods.go:74] duration metric: took 183.034922ms to wait for pod list to return data ...
	I0708 21:01:38.010987   59655 default_sa.go:34] waiting for default service account to be created ...
	I0708 21:01:38.205307   59655 default_sa.go:45] found service account: "default"
	I0708 21:01:38.205331   59655 default_sa.go:55] duration metric: took 194.338319ms for default service account to be created ...
	I0708 21:01:38.205340   59655 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 21:01:38.410958   59655 system_pods.go:86] 9 kube-system pods found
	I0708 21:01:38.410988   59655 system_pods.go:89] "coredns-7db6d8ff4d-8msvk" [38c1e0eb-5eb4-4acb-a5ae-c72871884e3d] Running
	I0708 21:01:38.410995   59655 system_pods.go:89] "coredns-7db6d8ff4d-hq7zj" [ddb0f99d-a91d-4bb7-96e7-695b6101a601] Running
	I0708 21:01:38.411000   59655 system_pods.go:89] "etcd-default-k8s-diff-port-071971" [e3399214-404c-423e-9648-b4d920028a92] Running
	I0708 21:01:38.411005   59655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071971" [7b726b49-c243-4126-b6d2-fc12abc9a042] Running
	I0708 21:01:38.411009   59655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071971" [6a731125-daa4-4da1-b9e0-1206da592fde] Running
	I0708 21:01:38.411013   59655 system_pods.go:89] "kube-proxy-l2mdd" [b1d70ae2-ed86-49bd-8910-a12c5cd8091a] Running
	I0708 21:01:38.411017   59655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071971" [dc238033-038e-49ec-ba48-392b0ec2f7bd] Running
	I0708 21:01:38.411024   59655 system_pods.go:89] "metrics-server-569cc877fc-k8vhl" [09f957f3-d76f-4f21-b9a6-e5b249d07e1e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0708 21:01:38.411029   59655 system_pods.go:89] "storage-provisioner" [805a8fdb-ed9e-4f80-a2c9-7d8a0155b228] Running
	I0708 21:01:38.411040   59655 system_pods.go:126] duration metric: took 205.695019ms to wait for k8s-apps to be running ...
	I0708 21:01:38.411050   59655 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 21:01:38.411092   59655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 21:01:38.428218   59655 system_svc.go:56] duration metric: took 17.158999ms WaitForService to wait for kubelet
	I0708 21:01:38.428248   59655 kubeadm.go:576] duration metric: took 4.737530934s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:01:38.428270   59655 node_conditions.go:102] verifying NodePressure condition ...
	I0708 21:01:38.606369   59655 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 21:01:38.606394   59655 node_conditions.go:123] node cpu capacity is 2
	I0708 21:01:38.606404   59655 node_conditions.go:105] duration metric: took 178.130401ms to run NodePressure ...
	I0708 21:01:38.606415   59655 start.go:240] waiting for startup goroutines ...
	I0708 21:01:38.606423   59655 start.go:245] waiting for cluster config update ...
	I0708 21:01:38.606432   59655 start.go:254] writing updated cluster config ...
	I0708 21:01:38.606686   59655 ssh_runner.go:195] Run: rm -f paused
	I0708 21:01:38.657280   59655 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0708 21:01:38.659556   59655 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-071971" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.732544295Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473120732518245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90817b01-426a-4f03-a6b9-4f8b11ac1f45 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.733298688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d797c60-59d2-41d2-ab79-1536cb29f9fe name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.733397271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d797c60-59d2-41d2-ab79-1536cb29f9fe name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.733447226Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5d797c60-59d2-41d2-ab79-1536cb29f9fe name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.770061120Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db561684-5d05-4805-a192-d5cfa514a67f name=/runtime.v1.RuntimeService/Version
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.770190409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db561684-5d05-4805-a192-d5cfa514a67f name=/runtime.v1.RuntimeService/Version
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.771704409Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=832bd31f-d6b7-4fa5-bb71-91bf90e8b473 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.772109903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473120772074904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=832bd31f-d6b7-4fa5-bb71-91bf90e8b473 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.772723930Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbd45af4-c0e0-48f1-9134-84735602cb2a name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.772797406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbd45af4-c0e0-48f1-9134-84735602cb2a name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.772837078Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cbd45af4-c0e0-48f1-9134-84735602cb2a name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.809074232Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a91173a-d850-4a91-9307-32882d975f9b name=/runtime.v1.RuntimeService/Version
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.809171976Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a91173a-d850-4a91-9307-32882d975f9b name=/runtime.v1.RuntimeService/Version
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.810709515Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab65a9b7-3f5a-48e0-8cdd-bfbdff942f83 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.811186919Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473120811157720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab65a9b7-3f5a-48e0-8cdd-bfbdff942f83 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.811965914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00fb503d-ffdb-4e9e-839b-9dc4c60bc2a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.812065262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00fb503d-ffdb-4e9e-839b-9dc4c60bc2a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.812130904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=00fb503d-ffdb-4e9e-839b-9dc4c60bc2a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.848056173Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a06e946-95c4-49ec-83a4-c03c3abe8e11 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.848151065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a06e946-95c4-49ec-83a4-c03c3abe8e11 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.849624295Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d924bdf-cf6c-43de-9555-0b982c483b4a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.850055927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473120850027075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d924bdf-cf6c-43de-9555-0b982c483b4a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.850849374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac5f2f5b-b8c0-4bec-8498-a580124ac542 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.850953819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac5f2f5b-b8c0-4bec-8498-a580124ac542 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:12:00 old-k8s-version-914355 crio[647]: time="2024-07-08 21:12:00.850995414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ac5f2f5b-b8c0-4bec-8498-a580124ac542 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul 8 20:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050631] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039837] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.623579] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul 8 20:49] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.602924] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.192762] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.057317] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062771] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.200906] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.157667] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.288740] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +6.100045] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.067577] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.762847] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[ +12.466178] kauditd_printk_skb: 46 callbacks suppressed
	[Jul 8 20:53] systemd-fstab-generator[5013]: Ignoring "noauto" option for root device
	[Jul 8 20:55] systemd-fstab-generator[5303]: Ignoring "noauto" option for root device
	[  +0.059941] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:12:01 up 23 min,  0 users,  load average: 0.00, 0.01, 0.00
	Linux old-k8s-version-914355 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000b2e4b0, 0xc00096e540, 0x23, 0xc000387e40)
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]: created by internal/singleflight.(*Group).DoChan
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]: goroutine 166 [runnable]:
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]: net._C2func_getaddrinfo(0xc000b361e0, 0x0, 0xc00096ba10, 0xc000b23290, 0x0, 0x0, 0x0)
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]:         _cgo_gotypes.go:94 +0x55
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]: net.cgoLookupIPCNAME.func1(0xc000b361e0, 0x20, 0x20, 0xc00096ba10, 0xc000b23290, 0x0, 0xc0005596a0, 0x57a492)
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc00096e510, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]: net.cgoIPLookup(0xc000a510e0, 0x48ab5d6, 0x3, 0xc00096e510, 0x1f)
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]: created by net.cgoLookupIP
	Jul 08 21:11:58 old-k8s-version-914355 kubelet[7100]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Jul 08 21:11:58 old-k8s-version-914355 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 08 21:11:58 old-k8s-version-914355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 08 21:11:59 old-k8s-version-914355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 173.
	Jul 08 21:11:59 old-k8s-version-914355 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 08 21:11:59 old-k8s-version-914355 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 08 21:11:59 old-k8s-version-914355 kubelet[7109]: I0708 21:11:59.345788    7109 server.go:416] Version: v1.20.0
	Jul 08 21:11:59 old-k8s-version-914355 kubelet[7109]: I0708 21:11:59.346066    7109 server.go:837] Client rotation is on, will bootstrap in background
	Jul 08 21:11:59 old-k8s-version-914355 kubelet[7109]: I0708 21:11:59.348050    7109 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 08 21:11:59 old-k8s-version-914355 kubelet[7109]: I0708 21:11:59.349155    7109 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 08 21:11:59 old-k8s-version-914355 kubelet[7109]: W0708 21:11:59.349228    7109 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914355 -n old-k8s-version-914355
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 2 (258.019954ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-914355" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (340.56s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (415.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-239931 -n embed-certs-239931
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-08 21:17:12.529091702 +0000 UTC m=+6492.441969509
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-239931 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-239931 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.334µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-239931 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-239931 -n embed-certs-239931
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-239931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-239931 logs -n 25: (1.529764821s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-028021             | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-914355             | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-239931            | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-733920 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-733920                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:50 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028021                  | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071971  | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-239931                 | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071971       | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC | 08 Jul 24 21:01 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 21:12 UTC | 08 Jul 24 21:12 UTC |
	| start   | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273    | jenkins | v1.33.1 | 08 Jul 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273    | jenkins | v1.33.1 | 08 Jul 24 21:16 UTC | 08 Jul 24 21:16 UTC |
	| start   | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273    | jenkins | v1.33.1 | 08 Jul 24 21:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:17 UTC |
	| start   | -p stopped-upgrade-957981                              | minikube                     | jenkins | v1.26.0 | 08 Jul 24 21:17 UTC |                     |
	|         | --memory=2200 --vm-driver=kvm2                         |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 21:17:03
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 21:17:03.692626   66211 out.go:296] Setting OutFile to fd 1 ...
	I0708 21:17:03.692864   66211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0708 21:17:03.692867   66211 out.go:309] Setting ErrFile to fd 2...
	I0708 21:17:03.692871   66211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0708 21:17:03.693443   66211 root.go:329] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 21:17:03.693728   66211 out.go:303] Setting JSON to false
	I0708 21:17:03.694659   66211 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7173,"bootTime":1720466251,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 21:17:03.694733   66211 start.go:125] virtualization: kvm guest
	I0708 21:17:03.697175   66211 out.go:177] * [stopped-upgrade-957981] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0708 21:17:03.699084   66211 notify.go:193] Checking for updates...
	I0708 21:17:03.699099   66211 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 21:17:03.700777   66211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 21:17:03.702554   66211 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 21:17:03.704087   66211 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 21:17:03.705806   66211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 21:17:03.707679   66211 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig2281972767
	I0708 21:17:03.709540   66211 config.go:178] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:17:03.709686   66211 config.go:178] Loaded profile config "embed-certs-239931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:17:03.709798   66211 config.go:178] Loaded profile config "kubernetes-upgrade-467273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:17:03.709907   66211 driver.go:360] Setting default libvirt URI to qemu:///system
	I0708 21:17:03.758190   66211 out.go:177] * Using the kvm2 driver based on user configuration
	I0708 21:17:03.759378   66211 start.go:284] selected driver: kvm2
	I0708 21:17:03.759388   66211 start.go:805] validating driver "kvm2" against <nil>
	I0708 21:17:03.759407   66211 start.go:816] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 21:17:03.760431   66211 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 21:17:03.760702   66211 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 21:17:03.777870   66211 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 21:17:03.777944   66211 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0708 21:17:03.778146   66211 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0708 21:17:03.778163   66211 cni.go:95] Creating CNI manager for ""
	I0708 21:17:03.778173   66211 cni.go:165] "kvm2" driver + crio runtime found, recommending bridge
	I0708 21:17:03.778177   66211 start_flags.go:305] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 21:17:03.778183   66211 start_flags.go:310] config:
	{Name:stopped-upgrade-957981 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-957981 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0708 21:17:03.778266   66211 iso.go:128] acquiring lock: {Name:mk301ace514d0228cd573610dabd11cf915144a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 21:17:03.780542   66211 out.go:177] * Starting control plane node stopped-upgrade-957981 in cluster stopped-upgrade-957981
	I0708 21:17:03.781952   66211 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0708 21:17:03.782004   66211 preload.go:148] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0708 21:17:03.782013   66211 cache.go:57] Caching tarball of preloaded images
	I0708 21:17:03.782209   66211 preload.go:174] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 21:17:03.782236   66211 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.1 on crio
	I0708 21:17:03.782373   66211 profile.go:148] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/stopped-upgrade-957981/config.json ...
	I0708 21:17:03.782393   66211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/stopped-upgrade-957981/config.json: {Name:mk73e428528851bae9aee142897dcd8e1f1a15f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:17:03.782642   66211 cache.go:208] Successfully downloaded all kic artifacts
	I0708 21:17:03.782704   66211 start.go:352] acquiring machines lock for stopped-upgrade-957981: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 21:17:03.782765   66211 start.go:356] acquired machines lock for "stopped-upgrade-957981" in 47.164µs
	I0708 21:17:03.782791   66211 start.go:91] Provisioning new machine with config: &{Name:stopped-upgrade-957981 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopp
ed-upgrade-957981 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:17:03.782866   66211 start.go:131] createHost starting for "" (driver="kvm2")
	I0708 21:17:01.316264   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetIP
	I0708 21:17:01.319353   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:01.319807   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6e:d6", ip: ""} in network mk-kubernetes-upgrade-467273: {Iface:virbr2 ExpiryTime:2024-07-08 22:16:51 +0000 UTC Type:0 Mac:52:54:00:16:6e:d6 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:kubernetes-upgrade-467273 Clientid:01:52:54:00:16:6e:d6}
	I0708 21:17:01.319841   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined IP address 192.168.50.94 and MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:17:01.320141   65818 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0708 21:17:01.327331   65818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
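
The Run line above uses a bash filter-then-append to keep exactly one host.minikube.internal entry in the guest's /etc/hosts. A standalone Go sketch of the same idempotent update, for illustration only (minikube runs the bash form over SSH as logged; the entry value is the one from this run, and the program would need root on the guest):

	// Illustrative sketch: rewrite /etc/hosts so that exactly one
	// host.minikube.internal entry remains, pointing at the gateway IP.
	package main
	
	import (
		"log"
		"os"
		"strings"
	)
	
	func main() {
		const (
			hostsPath = "/etc/hosts"
			entry     = "192.168.50.1\thost.minikube.internal" // value from this run
		)
	
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			log.Fatalf("read hosts: %v", err)
		}
	
		// Drop any stale host.minikube.internal line, then append the fresh one.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
	
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatalf("write hosts: %v", err)
		}
	}
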
	I0708 21:17:01.347079   65818 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-467273 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:kubernetes-upgrade-467273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 21:17:01.347228   65818 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 21:17:01.347291   65818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 21:17:01.409995   65818 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 21:17:01.410085   65818 ssh_runner.go:195] Run: which lz4
	I0708 21:17:01.414827   65818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 21:17:01.419694   65818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 21:17:01.419732   65818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 21:17:03.055499   65818 crio.go:462] duration metric: took 1.640695269s to copy over tarball
	I0708 21:17:03.055588   65818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 21:17:03.784790   66211 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 21:17:03.784944   66211 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:17:03.784997   66211 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0708 21:17:03.804236   66211 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33187
	I0708 21:17:03.804883   66211 main.go:134] libmachine: () Calling .GetVersion
	I0708 21:17:03.805721   66211 main.go:134] libmachine: Using API Version  1
	I0708 21:17:03.805743   66211 main.go:134] libmachine: () Calling .SetConfigRaw
	I0708 21:17:03.806144   66211 main.go:134] libmachine: () Calling .GetMachineName
	I0708 21:17:03.806391   66211 main.go:134] libmachine: (stopped-upgrade-957981) Calling .GetMachineName
	I0708 21:17:03.806576   66211 main.go:134] libmachine: (stopped-upgrade-957981) Calling .DriverName
	I0708 21:17:03.806737   66211 start.go:165] libmachine.API.Create for "stopped-upgrade-957981" (driver="kvm2")
	I0708 21:17:03.806757   66211 client.go:168] LocalClient.Create starting
	I0708 21:17:03.806810   66211 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem
	I0708 21:17:03.806840   66211 main.go:134] libmachine: Decoding PEM data...
	I0708 21:17:03.806851   66211 main.go:134] libmachine: Parsing certificate...
	I0708 21:17:03.806915   66211 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem
	I0708 21:17:03.806929   66211 main.go:134] libmachine: Decoding PEM data...
	I0708 21:17:03.806938   66211 main.go:134] libmachine: Parsing certificate...
	I0708 21:17:03.806952   66211 main.go:134] libmachine: Running pre-create checks...
	I0708 21:17:03.806965   66211 main.go:134] libmachine: (stopped-upgrade-957981) Calling .PreCreateCheck
	I0708 21:17:03.807374   66211 main.go:134] libmachine: (stopped-upgrade-957981) Calling .GetConfigRaw
	I0708 21:17:03.807906   66211 main.go:134] libmachine: Creating machine...
	I0708 21:17:03.807918   66211 main.go:134] libmachine: (stopped-upgrade-957981) Calling .Create
	I0708 21:17:03.808085   66211 main.go:134] libmachine: (stopped-upgrade-957981) Creating KVM machine...
	I0708 21:17:03.809713   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | found existing default KVM network
	I0708 21:17:03.811781   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | I0708 21:17:03.811579   66235 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030a000}
	I0708 21:17:03.811857   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | created network xml: 
	I0708 21:17:03.811877   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | <network>
	I0708 21:17:03.811889   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG |   <name>mk-stopped-upgrade-957981</name>
	I0708 21:17:03.811897   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG |   <dns enable='no'/>
	I0708 21:17:03.811906   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG |   
	I0708 21:17:03.811917   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0708 21:17:03.811926   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG |     <dhcp>
	I0708 21:17:03.811935   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0708 21:17:03.811945   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG |     </dhcp>
	I0708 21:17:03.811955   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG |   </ip>
	I0708 21:17:03.811966   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG |   
	I0708 21:17:03.811972   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | </network>
	I0708 21:17:03.811980   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | 
	I0708 21:17:03.818122   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | trying to create private KVM network mk-stopped-upgrade-957981 192.168.39.0/24...
	I0708 21:17:03.899478   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | private KVM network mk-stopped-upgrade-957981 192.168.39.0/24 created
	I0708 21:17:03.899500   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | I0708 21:17:03.899406   66235 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 21:17:03.899524   66211 main.go:134] libmachine: (stopped-upgrade-957981) Setting up store path in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/stopped-upgrade-957981 ...
	I0708 21:17:03.899536   66211 main.go:134] libmachine: (stopped-upgrade-957981) Building disk image from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso
	I0708 21:17:03.899556   66211 main.go:134] libmachine: (stopped-upgrade-957981) Downloading /home/jenkins/minikube-integration/19195-5988/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso...
	I0708 21:17:04.114726   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | I0708 21:17:04.114581   66235 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/stopped-upgrade-957981/id_rsa...
	I0708 21:17:04.330178   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | I0708 21:17:04.330008   66235 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/stopped-upgrade-957981/stopped-upgrade-957981.rawdisk...
	I0708 21:17:04.330207   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | Writing magic tar header
	I0708 21:17:04.330225   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | Writing SSH key tar header
	I0708 21:17:04.330238   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | I0708 21:17:04.330132   66235 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/stopped-upgrade-957981 ...
	I0708 21:17:04.330258   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/stopped-upgrade-957981
	I0708 21:17:04.330272   66211 main.go:134] libmachine: (stopped-upgrade-957981) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/stopped-upgrade-957981 (perms=drwx------)
	I0708 21:17:04.330283   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines
	I0708 21:17:04.330296   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 21:17:04.330307   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988
	I0708 21:17:04.330329   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0708 21:17:04.330338   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | Checking permissions on dir: /home/jenkins
	I0708 21:17:04.330348   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | Checking permissions on dir: /home
	I0708 21:17:04.330360   66211 main.go:134] libmachine: (stopped-upgrade-957981) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines (perms=drwxr-xr-x)
	I0708 21:17:04.330373   66211 main.go:134] libmachine: (stopped-upgrade-957981) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube (perms=drwxr-xr-x)
	I0708 21:17:04.330385   66211 main.go:134] libmachine: (stopped-upgrade-957981) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988 (perms=drwxrwxr-x)
	I0708 21:17:04.330393   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | Skipping /home - not owner
	I0708 21:17:04.330408   66211 main.go:134] libmachine: (stopped-upgrade-957981) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0708 21:17:04.330417   66211 main.go:134] libmachine: (stopped-upgrade-957981) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0708 21:17:04.330428   66211 main.go:134] libmachine: (stopped-upgrade-957981) Creating domain...
	I0708 21:17:04.331744   66211 main.go:134] libmachine: (stopped-upgrade-957981) define libvirt domain using xml: 
	I0708 21:17:04.331762   66211 main.go:134] libmachine: (stopped-upgrade-957981) <domain type='kvm'>
	I0708 21:17:04.331775   66211 main.go:134] libmachine: (stopped-upgrade-957981)   <name>stopped-upgrade-957981</name>
	I0708 21:17:04.331787   66211 main.go:134] libmachine: (stopped-upgrade-957981)   <memory unit='MiB'>2200</memory>
	I0708 21:17:04.331819   66211 main.go:134] libmachine: (stopped-upgrade-957981)   <vcpu>2</vcpu>
	I0708 21:17:04.331830   66211 main.go:134] libmachine: (stopped-upgrade-957981)   <features>
	I0708 21:17:04.331839   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <acpi/>
	I0708 21:17:04.331855   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <apic/>
	I0708 21:17:04.331889   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <pae/>
	I0708 21:17:04.331908   66211 main.go:134] libmachine: (stopped-upgrade-957981)     
	I0708 21:17:04.331918   66211 main.go:134] libmachine: (stopped-upgrade-957981)   </features>
	I0708 21:17:04.331927   66211 main.go:134] libmachine: (stopped-upgrade-957981)   <cpu mode='host-passthrough'>
	I0708 21:17:04.331936   66211 main.go:134] libmachine: (stopped-upgrade-957981)   
	I0708 21:17:04.331944   66211 main.go:134] libmachine: (stopped-upgrade-957981)   </cpu>
	I0708 21:17:04.331951   66211 main.go:134] libmachine: (stopped-upgrade-957981)   <os>
	I0708 21:17:04.331956   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <type>hvm</type>
	I0708 21:17:04.331969   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <boot dev='cdrom'/>
	I0708 21:17:04.331980   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <boot dev='hd'/>
	I0708 21:17:04.331991   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <bootmenu enable='no'/>
	I0708 21:17:04.331998   66211 main.go:134] libmachine: (stopped-upgrade-957981)   </os>
	I0708 21:17:04.332007   66211 main.go:134] libmachine: (stopped-upgrade-957981)   <devices>
	I0708 21:17:04.332016   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <disk type='file' device='cdrom'>
	I0708 21:17:04.332041   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/stopped-upgrade-957981/boot2docker.iso'/>
	I0708 21:17:04.332051   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <target dev='hdc' bus='scsi'/>
	I0708 21:17:04.332057   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <readonly/>
	I0708 21:17:04.332065   66211 main.go:134] libmachine: (stopped-upgrade-957981)     </disk>
	I0708 21:17:04.332076   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <disk type='file' device='disk'>
	I0708 21:17:04.332087   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0708 21:17:04.332105   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/stopped-upgrade-957981/stopped-upgrade-957981.rawdisk'/>
	I0708 21:17:04.332114   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <target dev='hda' bus='virtio'/>
	I0708 21:17:04.332122   66211 main.go:134] libmachine: (stopped-upgrade-957981)     </disk>
	I0708 21:17:04.332132   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <interface type='network'>
	I0708 21:17:04.332138   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <source network='mk-stopped-upgrade-957981'/>
	I0708 21:17:04.332144   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <model type='virtio'/>
	I0708 21:17:04.332152   66211 main.go:134] libmachine: (stopped-upgrade-957981)     </interface>
	I0708 21:17:04.332162   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <interface type='network'>
	I0708 21:17:04.332172   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <source network='default'/>
	I0708 21:17:04.332182   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <model type='virtio'/>
	I0708 21:17:04.332191   66211 main.go:134] libmachine: (stopped-upgrade-957981)     </interface>
	I0708 21:17:04.332199   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <serial type='pty'>
	I0708 21:17:04.332208   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <target port='0'/>
	I0708 21:17:04.332216   66211 main.go:134] libmachine: (stopped-upgrade-957981)     </serial>
	I0708 21:17:04.332227   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <console type='pty'>
	I0708 21:17:04.332236   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <target type='serial' port='0'/>
	I0708 21:17:04.332243   66211 main.go:134] libmachine: (stopped-upgrade-957981)     </console>
	I0708 21:17:04.332252   66211 main.go:134] libmachine: (stopped-upgrade-957981)     <rng model='virtio'>
	I0708 21:17:04.332261   66211 main.go:134] libmachine: (stopped-upgrade-957981)       <backend model='random'>/dev/random</backend>
	I0708 21:17:04.332271   66211 main.go:134] libmachine: (stopped-upgrade-957981)     </rng>
	I0708 21:17:04.332278   66211 main.go:134] libmachine: (stopped-upgrade-957981)     
	I0708 21:17:04.332286   66211 main.go:134] libmachine: (stopped-upgrade-957981)     
	I0708 21:17:04.332295   66211 main.go:134] libmachine: (stopped-upgrade-957981)   </devices>
	I0708 21:17:04.332304   66211 main.go:134] libmachine: (stopped-upgrade-957981) </domain>
	I0708 21:17:04.332310   66211 main.go:134] libmachine: (stopped-upgrade-957981) 
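
The <domain> XML printed above is what the kvm2 machine driver hands to libvirt before the "Creating domain..." step. A minimal sketch of that define-and-start call using the libvirt Go bindings (libvirt.org/go/libvirt), not the driver's actual code, assuming the XML has been saved to a local domain.xml:

	// Minimal sketch: define and start a libvirt domain from an XML document
	// like the one logged above. Assumes libvirt.org/go/libvirt and a local
	// qemu:///system socket; domain.xml is a hypothetical path.
	package main
	
	import (
		"log"
		"os"
	
		"libvirt.org/go/libvirt"
	)
	
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect to libvirt: %v", err)
		}
		defer conn.Close()
	
		xml, err := os.ReadFile("domain.xml") // the <domain type='kvm'>...</domain> definition
		if err != nil {
			log.Fatalf("read domain xml: %v", err)
		}
	
		dom, err := conn.DomainDefineXML(string(xml)) // persistently define the domain
		if err != nil {
			log.Fatalf("define domain: %v", err)
		}
		defer dom.Free()
	
		if err := dom.Create(); err != nil { // boot it, the "Creating domain..." step
			log.Fatalf("start domain: %v", err)
		}
	}
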
	I0708 21:17:04.337986   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:24:c4:c3 in network default
	I0708 21:17:04.338711   66211 main.go:134] libmachine: (stopped-upgrade-957981) Ensuring networks are active...
	I0708 21:17:04.338726   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:17:04.339543   66211 main.go:134] libmachine: (stopped-upgrade-957981) Ensuring network default is active
	I0708 21:17:04.339876   66211 main.go:134] libmachine: (stopped-upgrade-957981) Ensuring network mk-stopped-upgrade-957981 is active
	I0708 21:17:04.340662   66211 main.go:134] libmachine: (stopped-upgrade-957981) Getting domain xml...
	I0708 21:17:04.341582   66211 main.go:134] libmachine: (stopped-upgrade-957981) Creating domain...
	I0708 21:17:05.769834   66211 main.go:134] libmachine: (stopped-upgrade-957981) Waiting to get IP...
	I0708 21:17:05.770556   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:17:05.771032   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:17:05.771048   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | I0708 21:17:05.771006   66235 retry.go:31] will retry after 291.835446ms: waiting for machine to come up
	I0708 21:17:06.064800   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:17:06.065356   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:17:06.065391   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | I0708 21:17:06.065307   66235 retry.go:31] will retry after 300.248552ms: waiting for machine to come up
	I0708 21:17:06.366861   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:17:06.367387   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:17:06.367411   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | I0708 21:17:06.367281   66235 retry.go:31] will retry after 341.144424ms: waiting for machine to come up
	I0708 21:17:06.709850   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:17:06.710402   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:17:06.710430   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | I0708 21:17:06.710342   66235 retry.go:31] will retry after 550.463648ms: waiting for machine to come up
	I0708 21:17:07.262195   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:17:07.262684   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:17:07.262703   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | I0708 21:17:07.262633   66235 retry.go:31] will retry after 612.310213ms: waiting for machine to come up
	I0708 21:17:07.876378   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:17:07.877202   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:17:07.877222   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | I0708 21:17:07.877152   66235 retry.go:31] will retry after 621.322324ms: waiting for machine to come up
	I0708 21:17:08.499789   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | domain stopped-upgrade-957981 has defined MAC address 52:54:00:95:54:93 in network mk-stopped-upgrade-957981
	I0708 21:17:08.500369   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | unable to find current IP address of domain stopped-upgrade-957981 in network mk-stopped-upgrade-957981
	I0708 21:17:08.500396   66211 main.go:134] libmachine: (stopped-upgrade-957981) DBG | I0708 21:17:08.500328   66235 retry.go:31] will retry after 873.207786ms: waiting for machine to come up
	I0708 21:17:05.641840   65818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.586215321s)
	I0708 21:17:05.641880   65818 crio.go:469] duration metric: took 2.586349129s to extract the tarball
	I0708 21:17:05.641893   65818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 21:17:05.685413   65818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 21:17:05.744937   65818 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 21:17:05.744967   65818 cache_images.go:84] Images are preloaded, skipping loading
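
The preload sequence above boils down to: if crictl reports the expected images are missing, copy the lz4-compressed image tarball to the node and unpack it under /var, then remove it. A simplified sketch of the unpack step as it would run on the guest; the tar flags, paths and use of sudo are taken from this run, not a general recipe:

	// Sketch of the preload unpack step, assuming the lz4 tarball has already
	// been copied to /preloaded.tar.lz4 on the guest.
	package main
	
	import (
		"log"
		"os"
		"os/exec"
	)
	
	func main() {
		const tarball = "/preloaded.tar.lz4"
	
		if _, err := os.Stat(tarball); err != nil {
			log.Fatalf("preload tarball not present: %v", err)
		}
	
		// Keep xattrs such as security.capability and stream through lz4 while
		// unpacking the cached container images under /var.
		extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		extract.Stdout, extract.Stderr = os.Stdout, os.Stderr
		if err := extract.Run(); err != nil {
			log.Fatalf("extract preload: %v", err)
		}
	
		// The tarball is removed afterwards, mirroring the logged rm.
		if err := exec.Command("sudo", "rm", "-f", tarball).Run(); err != nil {
			log.Fatalf("cleanup: %v", err)
		}
	}
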
	I0708 21:17:05.744978   65818 kubeadm.go:928] updating node { 192.168.50.94 8443 v1.30.2 crio true true} ...
	I0708 21:17:05.745215   65818 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-467273 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-467273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
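
The kubelet drop-in printed above is generated per node, with only the binary path, --hostname-override and --node-ip changing between profiles. A sketch of rendering it with text/template; the template text mirrors the logged unit, while the struct, field names and output handling are illustrative, not minikube's code:

	// Illustrative sketch: render a kubelet systemd drop-in like the one
	// logged above from a few node-specific values.
	package main
	
	import (
		"log"
		"os"
		"text/template"
	)
	
	const kubeletDropIn = `[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
	
	[Install]
	`
	
	type nodeParams struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}
	
	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
		// Values taken from the run above; any other node substitutes its own.
		params := nodeParams{
			KubernetesVersion: "v1.30.2",
			NodeName:          "kubernetes-upgrade-467273",
			NodeIP:            "192.168.50.94",
		}
		if err := tmpl.Execute(os.Stdout, params); err != nil {
			log.Fatalf("render kubelet drop-in: %v", err)
		}
	}
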
	I0708 21:17:05.745306   65818 ssh_runner.go:195] Run: crio config
	I0708 21:17:05.811394   65818 cni.go:84] Creating CNI manager for ""
	I0708 21:17:05.811418   65818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:17:05.811430   65818 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 21:17:05.811466   65818 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.94 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-467273 NodeName:kubernetes-upgrade-467273 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 21:17:05.811649   65818 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-467273"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 21:17:05.811732   65818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 21:17:05.824196   65818 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 21:17:05.824271   65818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 21:17:05.836156   65818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0708 21:17:05.855362   65818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 21:17:05.874722   65818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0708 21:17:05.893229   65818 ssh_runner.go:195] Run: grep 192.168.50.94	control-plane.minikube.internal$ /etc/hosts
	I0708 21:17:05.897532   65818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 21:17:05.912516   65818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:17:06.056806   65818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:17:06.077436   65818 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273 for IP: 192.168.50.94
	I0708 21:17:06.077461   65818 certs.go:194] generating shared ca certs ...
	I0708 21:17:06.077483   65818 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:17:06.077666   65818 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 21:17:06.077730   65818 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 21:17:06.077743   65818 certs.go:256] generating profile certs ...
	I0708 21:17:06.077862   65818 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/client.key
	I0708 21:17:06.077942   65818 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.key.2cb56847
	I0708 21:17:06.077992   65818 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.key
	I0708 21:17:06.078131   65818 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 21:17:06.078168   65818 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 21:17:06.078183   65818 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 21:17:06.078216   65818 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 21:17:06.078247   65818 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 21:17:06.078281   65818 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 21:17:06.078333   65818 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 21:17:06.079123   65818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 21:17:06.138405   65818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 21:17:06.181622   65818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 21:17:06.218862   65818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 21:17:06.268274   65818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0708 21:17:06.311689   65818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 21:17:06.355727   65818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 21:17:06.389191   65818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 21:17:06.419322   65818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 21:17:06.452493   65818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 21:17:06.483238   65818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 21:17:06.514427   65818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 21:17:06.537275   65818 ssh_runner.go:195] Run: openssl version
	I0708 21:17:06.544767   65818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 21:17:06.558644   65818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 21:17:06.564300   65818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 21:17:06.564373   65818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 21:17:06.571524   65818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 21:17:06.584579   65818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 21:17:06.597659   65818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:17:06.604351   65818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:17:06.604418   65818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:17:06.612827   65818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 21:17:06.628900   65818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 21:17:06.640837   65818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 21:17:06.646577   65818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 21:17:06.646653   65818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 21:17:06.653378   65818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 21:17:06.665628   65818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 21:17:06.672484   65818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 21:17:06.679906   65818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 21:17:06.687403   65818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 21:17:06.694888   65818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 21:17:06.702128   65818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 21:17:06.709496   65818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
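
The openssl runs above are the pre-restart health check: each control-plane certificate must still be valid 86400 seconds from now (-checkend 86400). The same check can be expressed directly in Go with crypto/x509; a sketch for illustration only (minikube itself shells out to openssl over SSH, and the paths below are the ones from the log):

	// Sketch of the "-checkend 86400" check: parse each PEM certificate and
	// confirm it does not expire within the next 24 hours.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func validForADay(path string) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Mirrors -checkend 86400: still valid 24h from now.
		return cert.NotAfter.After(time.Now().Add(24 * time.Hour)), nil
	}
	
	func main() {
		// The certificates checked in the log above.
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
			"/var/lib/minikube/certs/etcd/peer.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			ok, err := validForADay(p)
			if err != nil {
				log.Fatalf("%s: %v", p, err)
			}
			fmt.Printf("%s valid for 24h: %v\n", p, ok)
		}
	}
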
	I0708 21:17:06.716932   65818 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-467273 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.2 ClusterName:kubernetes-upgrade-467273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 21:17:06.717037   65818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 21:17:06.717112   65818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 21:17:06.768607   65818 cri.go:89] found id: ""
	I0708 21:17:06.768678   65818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 21:17:06.783916   65818 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 21:17:06.783998   65818 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 21:17:06.784010   65818 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 21:17:06.784068   65818 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 21:17:06.796212   65818 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 21:17:06.797168   65818 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-467273" does not appear in /home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:17:06.797657   65818 kubeconfig.go:62] /home/jenkins/minikube-integration/19195-5988/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-467273" cluster setting kubeconfig missing "kubernetes-upgrade-467273" context setting]
	I0708 21:17:06.798509   65818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/kubeconfig: {Name:mk04a95d9e0722191246d0a7492cb27485d61143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:17:06.847722   65818 kapi.go:59] client config for kubernetes-upgrade-467273: &rest.Config{Host:"https://192.168.50.94:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/client.crt", KeyFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/client.key", CAFile:"/home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfdf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 21:17:06.848377   65818 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 21:17:06.860624   65818 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta2
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.50.94
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-467273"
	   kubeletExtraArgs:
	     node-ip: 192.168.50.94
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta2
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.50.94"]
	@@ -33,14 +33,12 @@
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.20.0
	+kubernetesVersion: v1.30.2
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	@@ -52,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
	I0708 21:17:06.860651   65818 kubeadm.go:1154] stopping kube-system containers ...
	I0708 21:17:06.860669   65818 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0708 21:17:06.860732   65818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 21:17:06.901999   65818 cri.go:89] found id: ""
	I0708 21:17:06.902069   65818 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 21:17:06.924583   65818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:17:06.939153   65818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:17:06.939179   65818 kubeadm.go:156] found existing configuration files:
	
	I0708 21:17:06.939254   65818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 21:17:06.950525   65818 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:17:06.950595   65818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:17:06.962448   65818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 21:17:06.973541   65818 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:17:06.973613   65818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:17:06.984763   65818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 21:17:06.996464   65818 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:17:06.996523   65818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:17:07.008083   65818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 21:17:07.020397   65818 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:17:07.020455   65818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 21:17:07.032579   65818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:17:07.043996   65818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 21:17:07.200748   65818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 21:17:08.412250   65818 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.211459832s)
	I0708 21:17:08.412282   65818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 21:17:08.652760   65818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 21:17:08.729061   65818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 21:17:08.816303   65818 api_server.go:52] waiting for apiserver process to appear ...
	I0708 21:17:08.816412   65818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:17:09.316516   65818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:17:09.817111   65818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 21:17:09.867883   65818 api_server.go:72] duration metric: took 1.05158262s to wait for apiserver process to appear ...
	I0708 21:17:09.867919   65818 api_server.go:88] waiting for apiserver healthz status ...
	I0708 21:17:09.867957   65818 api_server.go:253] Checking apiserver healthz at https://192.168.50.94:8443/healthz ...
	I0708 21:17:09.868572   65818 api_server.go:269] stopped: https://192.168.50.94:8443/healthz: Get "https://192.168.50.94:8443/healthz": dial tcp 192.168.50.94:8443: connect: connection refused
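	Note: the tail of the log above shows the post-restart wait loop, which probes the apiserver's /healthz endpoint roughly every 500ms until it answers or a timeout expires (the first probe fails with "connection refused" because the apiserver container is still starting). The snippet below is only a minimal sketch of that polling pattern, not minikube's actual api_server.go code; the 4-minute timeout is assumed, the URL is the one from the log, and TLS verification is skipped for brevity where the real client authenticates with the cluster CA and client certificates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 with body "ok" or the
	// timeout elapses. InsecureSkipVerify is a simplification for this sketch.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence seen in the log
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.94:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}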
	
	
	==> CRI-O <==
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.261940608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473433261914322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28508485-be6b-4c48-97e9-9f0c3f1ce8db name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.262661451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f779168-7c6d-496d-b9fb-59439ec28628 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.262792932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f779168-7c6d-496d-b9fb-59439ec28628 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.262980240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce0f4fb108aad8b7e4d5f290e6c38ba959eaff10eb996db4ead860b3da656ffe,PodSandboxId:ffe9c0f59fe34ac7cb5f8a5eba4ecf639cc36b1ef8f9e207e5cfadefae60ca76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472476002732002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe38aa1-fac7-4517-9b33-76f04d2a2f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 56f73b55,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9908f4f99d81652e5638627904e4a861913b81b85f94b5530d7b3eb98fc2c22d,PodSandboxId:a5db9a7e39014ba86f9ff76f744cafba01f3b73c4d3ecc827a95ebe36cd3339d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475436591795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e42c3f-d8a8-4907-b08d-ada6919b55c9,},Annotations:map[string]string{io.kubernetes.container.hash: dc8d0052,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147522b6da453cc658fcf803ab092f1f01ec6299c39beb49ed8aea8fb39183f2,PodSandboxId:2eba620a756036dea40572b4991f9d2e2fecc452569c6a7411509043777e0cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475313095981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l9xmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2723e6e-5bce-43ed-abdb-63120212456f,},Annotations:map[string]string{io.kubernetes.container.hash: d9faa6cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97ff111abfe87fd7f3ae2693205979802fb796c7a252ac101182b0b9045d31f,PodSandboxId:2783ac8e694caf272447f415c358283082e3dcc84c1b1f96c7ab834304944aab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1720472474690386153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkvf6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f5061c-fd24-42eb-97b4-e5ec5f57c325,},Annotations:map[string]string{io.kubernetes.container.hash: 85f22e48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd8b4dd934547918e6dd0265b5ab59c0c042fe802122b6dde6fb56c7525b3086,PodSandboxId:d6cdd9e57c5921ad5bdedfa19b2b18a1d993896cde4cf367957b7b0d90367a51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472454513704806,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5915f06682f25360235a0571bf07fcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1751db812059a2b25558db47e64e54db874fc689eaf21c9b94155e5cc6b8ee,PodSandboxId:a0bcc0d0f828fc731627af5ccec3acfbfea977382862bd79796061b5ee3f381e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472454505998034,Labels:map[string]str
ing{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017464a8eb9372d81943b1e895114a89,},Annotations:map[string]string{io.kubernetes.container.hash: 9bc29772,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3f064e707e3b8a1df2cecb502630c714a064fa2de639369fd830edb62267c4,PodSandboxId:ac1107c2f5394188a8e9f5bd7236c7285780827027e893ea96e1638362fed98f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472454487646172,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182433845698355cb350e0fe26b6032e,},Annotations:map[string]string{io.kubernetes.container.hash: a3816144,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99f8e5897ef06e4cad24cdd6d8f7c18a5b9d5637d7c6312b2816614ae7acb3d,PodSandboxId:cfd2e404c415fef9271b92134f9e0cb1919030310264b87584e9ffd5d9258330,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472454510272686,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a52823041510db1c9cec0ed257a7c73,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f779168-7c6d-496d-b9fb-59439ec28628 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.312287626Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a8237b1-09e5-4515-a439-cb8c1c706c69 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.312420671Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a8237b1-09e5-4515-a439-cb8c1c706c69 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.314246002Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6aa4a48-c5a8-4119-affa-da2840bfb2bf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.314914746Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473433314875551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6aa4a48-c5a8-4119-affa-da2840bfb2bf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.315584122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf499428-0a66-403e-9bd9-1fbb2747fad7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.315644698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf499428-0a66-403e-9bd9-1fbb2747fad7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.315954855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce0f4fb108aad8b7e4d5f290e6c38ba959eaff10eb996db4ead860b3da656ffe,PodSandboxId:ffe9c0f59fe34ac7cb5f8a5eba4ecf639cc36b1ef8f9e207e5cfadefae60ca76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472476002732002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe38aa1-fac7-4517-9b33-76f04d2a2f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 56f73b55,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9908f4f99d81652e5638627904e4a861913b81b85f94b5530d7b3eb98fc2c22d,PodSandboxId:a5db9a7e39014ba86f9ff76f744cafba01f3b73c4d3ecc827a95ebe36cd3339d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475436591795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e42c3f-d8a8-4907-b08d-ada6919b55c9,},Annotations:map[string]string{io.kubernetes.container.hash: dc8d0052,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147522b6da453cc658fcf803ab092f1f01ec6299c39beb49ed8aea8fb39183f2,PodSandboxId:2eba620a756036dea40572b4991f9d2e2fecc452569c6a7411509043777e0cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475313095981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l9xmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2723e6e-5bce-43ed-abdb-63120212456f,},Annotations:map[string]string{io.kubernetes.container.hash: d9faa6cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97ff111abfe87fd7f3ae2693205979802fb796c7a252ac101182b0b9045d31f,PodSandboxId:2783ac8e694caf272447f415c358283082e3dcc84c1b1f96c7ab834304944aab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1720472474690386153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkvf6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f5061c-fd24-42eb-97b4-e5ec5f57c325,},Annotations:map[string]string{io.kubernetes.container.hash: 85f22e48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd8b4dd934547918e6dd0265b5ab59c0c042fe802122b6dde6fb56c7525b3086,PodSandboxId:d6cdd9e57c5921ad5bdedfa19b2b18a1d993896cde4cf367957b7b0d90367a51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472454513704806,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5915f06682f25360235a0571bf07fcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1751db812059a2b25558db47e64e54db874fc689eaf21c9b94155e5cc6b8ee,PodSandboxId:a0bcc0d0f828fc731627af5ccec3acfbfea977382862bd79796061b5ee3f381e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472454505998034,Labels:map[string]str
ing{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017464a8eb9372d81943b1e895114a89,},Annotations:map[string]string{io.kubernetes.container.hash: 9bc29772,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3f064e707e3b8a1df2cecb502630c714a064fa2de639369fd830edb62267c4,PodSandboxId:ac1107c2f5394188a8e9f5bd7236c7285780827027e893ea96e1638362fed98f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472454487646172,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182433845698355cb350e0fe26b6032e,},Annotations:map[string]string{io.kubernetes.container.hash: a3816144,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99f8e5897ef06e4cad24cdd6d8f7c18a5b9d5637d7c6312b2816614ae7acb3d,PodSandboxId:cfd2e404c415fef9271b92134f9e0cb1919030310264b87584e9ffd5d9258330,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472454510272686,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a52823041510db1c9cec0ed257a7c73,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf499428-0a66-403e-9bd9-1fbb2747fad7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.368171290Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a4d1d63-aabf-41fc-ae0c-34248ce29bbd name=/runtime.v1.RuntimeService/Version
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.368308459Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a4d1d63-aabf-41fc-ae0c-34248ce29bbd name=/runtime.v1.RuntimeService/Version
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.371993450Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba0f0206-49aa-4fa2-a07b-a1902e41d449 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.372417215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473433372392452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba0f0206-49aa-4fa2-a07b-a1902e41d449 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.373218727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e760e77-c014-4044-85b2-a0c8c5941fc9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.373318570Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e760e77-c014-4044-85b2-a0c8c5941fc9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.373503924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce0f4fb108aad8b7e4d5f290e6c38ba959eaff10eb996db4ead860b3da656ffe,PodSandboxId:ffe9c0f59fe34ac7cb5f8a5eba4ecf639cc36b1ef8f9e207e5cfadefae60ca76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472476002732002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe38aa1-fac7-4517-9b33-76f04d2a2f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 56f73b55,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9908f4f99d81652e5638627904e4a861913b81b85f94b5530d7b3eb98fc2c22d,PodSandboxId:a5db9a7e39014ba86f9ff76f744cafba01f3b73c4d3ecc827a95ebe36cd3339d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475436591795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e42c3f-d8a8-4907-b08d-ada6919b55c9,},Annotations:map[string]string{io.kubernetes.container.hash: dc8d0052,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147522b6da453cc658fcf803ab092f1f01ec6299c39beb49ed8aea8fb39183f2,PodSandboxId:2eba620a756036dea40572b4991f9d2e2fecc452569c6a7411509043777e0cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475313095981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l9xmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2723e6e-5bce-43ed-abdb-63120212456f,},Annotations:map[string]string{io.kubernetes.container.hash: d9faa6cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97ff111abfe87fd7f3ae2693205979802fb796c7a252ac101182b0b9045d31f,PodSandboxId:2783ac8e694caf272447f415c358283082e3dcc84c1b1f96c7ab834304944aab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1720472474690386153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkvf6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f5061c-fd24-42eb-97b4-e5ec5f57c325,},Annotations:map[string]string{io.kubernetes.container.hash: 85f22e48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd8b4dd934547918e6dd0265b5ab59c0c042fe802122b6dde6fb56c7525b3086,PodSandboxId:d6cdd9e57c5921ad5bdedfa19b2b18a1d993896cde4cf367957b7b0d90367a51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472454513704806,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5915f06682f25360235a0571bf07fcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1751db812059a2b25558db47e64e54db874fc689eaf21c9b94155e5cc6b8ee,PodSandboxId:a0bcc0d0f828fc731627af5ccec3acfbfea977382862bd79796061b5ee3f381e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472454505998034,Labels:map[string]str
ing{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017464a8eb9372d81943b1e895114a89,},Annotations:map[string]string{io.kubernetes.container.hash: 9bc29772,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3f064e707e3b8a1df2cecb502630c714a064fa2de639369fd830edb62267c4,PodSandboxId:ac1107c2f5394188a8e9f5bd7236c7285780827027e893ea96e1638362fed98f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472454487646172,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182433845698355cb350e0fe26b6032e,},Annotations:map[string]string{io.kubernetes.container.hash: a3816144,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99f8e5897ef06e4cad24cdd6d8f7c18a5b9d5637d7c6312b2816614ae7acb3d,PodSandboxId:cfd2e404c415fef9271b92134f9e0cb1919030310264b87584e9ffd5d9258330,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472454510272686,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a52823041510db1c9cec0ed257a7c73,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e760e77-c014-4044-85b2-a0c8c5941fc9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.431636890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3fa640fb-3c89-4169-affe-408fc67a9a94 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.431735085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3fa640fb-3c89-4169-affe-408fc67a9a94 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.433875172Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1be33c93-728f-4028-a328-281c8830984d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.434659933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473433434621534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1be33c93-728f-4028-a328-281c8830984d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.435611108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33075136-9a8e-4a0e-9f6d-ca8b7ad09b78 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.435710505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33075136-9a8e-4a0e-9f6d-ca8b7ad09b78 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:13 embed-certs-239931 crio[726]: time="2024-07-08 21:17:13.436051029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce0f4fb108aad8b7e4d5f290e6c38ba959eaff10eb996db4ead860b3da656ffe,PodSandboxId:ffe9c0f59fe34ac7cb5f8a5eba4ecf639cc36b1ef8f9e207e5cfadefae60ca76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472476002732002,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abe38aa1-fac7-4517-9b33-76f04d2a2f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 56f73b55,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9908f4f99d81652e5638627904e4a861913b81b85f94b5530d7b3eb98fc2c22d,PodSandboxId:a5db9a7e39014ba86f9ff76f744cafba01f3b73c4d3ecc827a95ebe36cd3339d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475436591795,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e42c3f-d8a8-4907-b08d-ada6919b55c9,},Annotations:map[string]string{io.kubernetes.container.hash: dc8d0052,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147522b6da453cc658fcf803ab092f1f01ec6299c39beb49ed8aea8fb39183f2,PodSandboxId:2eba620a756036dea40572b4991f9d2e2fecc452569c6a7411509043777e0cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472475313095981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-l9xmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2723e6e-5bce-43ed-abdb-63120212456f,},Annotations:map[string]string{io.kubernetes.container.hash: d9faa6cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97ff111abfe87fd7f3ae2693205979802fb796c7a252ac101182b0b9045d31f,PodSandboxId:2783ac8e694caf272447f415c358283082e3dcc84c1b1f96c7ab834304944aab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1720472474690386153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkvf6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f5061c-fd24-42eb-97b4-e5ec5f57c325,},Annotations:map[string]string{io.kubernetes.container.hash: 85f22e48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd8b4dd934547918e6dd0265b5ab59c0c042fe802122b6dde6fb56c7525b3086,PodSandboxId:d6cdd9e57c5921ad5bdedfa19b2b18a1d993896cde4cf367957b7b0d90367a51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472454513704806,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5915f06682f25360235a0571bf07fcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c1751db812059a2b25558db47e64e54db874fc689eaf21c9b94155e5cc6b8ee,PodSandboxId:a0bcc0d0f828fc731627af5ccec3acfbfea977382862bd79796061b5ee3f381e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472454505998034,Labels:map[string]str
ing{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017464a8eb9372d81943b1e895114a89,},Annotations:map[string]string{io.kubernetes.container.hash: 9bc29772,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3f064e707e3b8a1df2cecb502630c714a064fa2de639369fd830edb62267c4,PodSandboxId:ac1107c2f5394188a8e9f5bd7236c7285780827027e893ea96e1638362fed98f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472454487646172,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182433845698355cb350e0fe26b6032e,},Annotations:map[string]string{io.kubernetes.container.hash: a3816144,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99f8e5897ef06e4cad24cdd6d8f7c18a5b9d5637d7c6312b2816614ae7acb3d,PodSandboxId:cfd2e404c415fef9271b92134f9e0cb1919030310264b87584e9ffd5d9258330,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472454510272686,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-239931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a52823041510db1c9cec0ed257a7c73,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33075136-9a8e-4a0e-9f6d-ca8b7ad09b78 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ce0f4fb108aad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   ffe9c0f59fe34       storage-provisioner
	9908f4f99d816       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   a5db9a7e39014       coredns-7db6d8ff4d-qbqkx
	147522b6da453       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   2eba620a75603       coredns-7db6d8ff4d-l9xmm
	c97ff111abfe8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   15 minutes ago      Running             kube-proxy                0                   2783ac8e694ca       kube-proxy-vkvf6
	cd8b4dd934547       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   16 minutes ago      Running             kube-scheduler            2                   d6cdd9e57c592       kube-scheduler-embed-certs-239931
	d99f8e5897ef0       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   16 minutes ago      Running             kube-controller-manager   2                   cfd2e404c415f       kube-controller-manager-embed-certs-239931
	5c1751db81205       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   16 minutes ago      Running             kube-apiserver            2                   a0bcc0d0f828f       kube-apiserver-embed-certs-239931
	1b3f064e707e3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   ac1107c2f5394       etcd-embed-certs-239931
	
	
	==> coredns [147522b6da453cc658fcf803ab092f1f01ec6299c39beb49ed8aea8fb39183f2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9908f4f99d81652e5638627904e4a861913b81b85f94b5530d7b3eb98fc2c22d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-239931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-239931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=embed-certs-239931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T21_01_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 21:00:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-239931
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 21:17:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 21:16:40 +0000   Mon, 08 Jul 2024 21:00:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 21:16:40 +0000   Mon, 08 Jul 2024 21:00:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 21:16:40 +0000   Mon, 08 Jul 2024 21:00:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 21:16:40 +0000   Mon, 08 Jul 2024 21:00:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.126
	  Hostname:    embed-certs-239931
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b0035653c9b0423ebfd272e326ad42bb
	  System UUID:                b0035653-c9b0-423e-bfd2-72e326ad42bb
	  Boot ID:                    1bcf4981-2530-463c-acb0-0ffab41f1d26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-l9xmm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-qbqkx                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-239931                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-239931             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-239931    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-vkvf6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-239931             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-f2dkn               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-239931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-239931 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-239931 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-239931 event: Registered Node embed-certs-239931 in Controller
	
	
	==> dmesg <==
	[  +0.051333] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041301] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.548936] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.249132] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.613290] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.871224] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.063897] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060084] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.201411] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.141133] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.286859] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.511273] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.065048] kauditd_printk_skb: 130 callbacks suppressed
	[Jul 8 20:56] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +5.593021] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.026092] kauditd_printk_skb: 84 callbacks suppressed
	[Jul 8 21:00] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.762065] systemd-fstab-generator[3558]: Ignoring "noauto" option for root device
	[  +4.732631] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.351170] systemd-fstab-generator[3885]: Ignoring "noauto" option for root device
	[Jul 8 21:01] systemd-fstab-generator[4086]: Ignoring "noauto" option for root device
	[  +0.109028] kauditd_printk_skb: 14 callbacks suppressed
	[Jul 8 21:02] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [1b3f064e707e3b8a1df2cecb502630c714a064fa2de639369fd830edb62267c4] <==
	{"level":"info","ts":"2024-07-08T21:00:55.740978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 became leader at term 2"}
	{"level":"info","ts":"2024-07-08T21:00:55.741002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2456aadc51424cb5 elected leader 2456aadc51424cb5 at term 2"}
	{"level":"info","ts":"2024-07-08T21:00:55.745018Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2456aadc51424cb5","local-member-attributes":"{Name:embed-certs-239931 ClientURLs:[https://192.168.61.126:2379]}","request-path":"/0/members/2456aadc51424cb5/attributes","cluster-id":"c6330389cea17d04","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T21:00:55.745072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T21:00:55.745409Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:00:55.745798Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T21:00:55.75161Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.126:2379"}
	{"level":"info","ts":"2024-07-08T21:00:55.751719Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c6330389cea17d04","local-member-id":"2456aadc51424cb5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:00:55.751845Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:00:55.751882Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:00:55.755639Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T21:00:55.763802Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T21:00:55.763848Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T21:10:55.79924Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":678}
	{"level":"info","ts":"2024-07-08T21:10:55.813877Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":678,"took":"13.472886ms","hash":3598047783,"current-db-size-bytes":2142208,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2142208,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-08T21:10:55.813941Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3598047783,"revision":678,"compact-revision":-1}
	{"level":"info","ts":"2024-07-08T21:12:36.115435Z","caller":"traceutil/trace.go:171","msg":"trace[1610840811] linearizableReadLoop","detail":"{readStateIndex:1155; appliedIndex:1154; }","duration":"121.505854ms","start":"2024-07-08T21:12:35.993893Z","end":"2024-07-08T21:12:36.115399Z","steps":["trace[1610840811] 'read index received'  (duration: 121.341686ms)","trace[1610840811] 'applied index is now lower than readState.Index'  (duration: 163.593µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T21:12:36.115713Z","caller":"traceutil/trace.go:171","msg":"trace[223698995] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"125.352472ms","start":"2024-07-08T21:12:35.99034Z","end":"2024-07-08T21:12:36.115692Z","steps":["trace[223698995] 'process raft request'  (duration: 124.93389ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T21:12:36.115898Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.124225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-08T21:12:36.116941Z","caller":"traceutil/trace.go:171","msg":"trace[912784098] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1005; }","duration":"123.248225ms","start":"2024-07-08T21:12:35.993675Z","end":"2024-07-08T21:12:36.116923Z","steps":["trace[912784098] 'agreement among raft nodes before linearized reading'  (duration: 122.122607ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T21:12:36.526477Z","caller":"traceutil/trace.go:171","msg":"trace[1716942168] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"225.186548ms","start":"2024-07-08T21:12:36.301275Z","end":"2024-07-08T21:12:36.526462Z","steps":["trace[1716942168] 'process raft request'  (duration: 224.82093ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T21:15:55.808656Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":922}
	{"level":"info","ts":"2024-07-08T21:15:55.812854Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":922,"took":"3.287707ms","hash":3271043239,"current-db-size-bytes":2142208,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-08T21:15:55.812968Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3271043239,"revision":922,"compact-revision":678}
	{"level":"info","ts":"2024-07-08T21:17:07.774219Z","caller":"traceutil/trace.go:171","msg":"trace[1890232992] transaction","detail":"{read_only:false; response_revision:1225; number_of_response:1; }","duration":"144.001396ms","start":"2024-07-08T21:17:07.630123Z","end":"2024-07-08T21:17:07.774125Z","steps":["trace[1890232992] 'process raft request'  (duration: 143.561038ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:17:13 up 21 min,  0 users,  load average: 0.00, 0.08, 0.09
	Linux embed-certs-239931 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5c1751db812059a2b25558db47e64e54db874fc689eaf21c9b94155e5cc6b8ee] <==
	I0708 21:11:58.487172       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:13:58.486799       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:13:58.486900       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:13:58.486909       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:13:58.488146       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:13:58.488281       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:13:58.488290       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:15:57.490347       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:15:57.490788       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0708 21:15:58.491011       1 handler_proxy.go:93] no RequestInfo found in the context
	W0708 21:15:58.491044       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:15:58.491246       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:15:58.491275       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0708 21:15:58.491361       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:15:58.492593       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:16:58.491810       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:16:58.492093       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:16:58.492126       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:16:58.493079       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:16:58.493149       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:16:58.493157       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d99f8e5897ef06e4cad24cdd6d8f7c18a5b9d5637d7c6312b2816614ae7acb3d] <==
	E0708 21:11:43.359150       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:11:43.979968       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:12:13.365379       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:12:13.990715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0708 21:12:23.537181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="131.514µs"
	I0708 21:12:36.578637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="222.556µs"
	E0708 21:12:43.371480       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:12:43.998539       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:13:13.377414       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:13:14.012869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:13:43.384567       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:13:44.023340       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:14:13.389671       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:14:14.032074       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:14:43.395335       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:14:44.041530       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:15:13.401163       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:15:14.050084       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:15:43.407190       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:15:44.059044       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:16:13.413013       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:16:14.067642       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:16:43.419516       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:16:44.076201       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:17:13.428055       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	
	
	==> kube-proxy [c97ff111abfe87fd7f3ae2693205979802fb796c7a252ac101182b0b9045d31f] <==
	I0708 21:01:16.006453       1 server_linux.go:69] "Using iptables proxy"
	I0708 21:01:16.032946       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.126"]
	I0708 21:01:16.212106       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 21:01:16.212331       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 21:01:16.212694       1 server_linux.go:165] "Using iptables Proxier"
	I0708 21:01:16.230866       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 21:01:16.231489       1 server.go:872] "Version info" version="v1.30.2"
	I0708 21:01:16.231537       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 21:01:16.236829       1 config.go:319] "Starting node config controller"
	I0708 21:01:16.236919       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 21:01:16.238031       1 config.go:192] "Starting service config controller"
	I0708 21:01:16.238628       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 21:01:16.239000       1 config.go:101] "Starting endpoint slice config controller"
	I0708 21:01:16.239051       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 21:01:16.337978       1 shared_informer.go:320] Caches are synced for node config
	I0708 21:01:16.339175       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 21:01:16.339350       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [cd8b4dd934547918e6dd0265b5ab59c0c042fe802122b6dde6fb56c7525b3086] <==
	W0708 21:00:57.500844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 21:00:57.500873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 21:00:58.312117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 21:00:58.312248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 21:00:58.359812       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 21:00:58.359912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 21:00:58.438607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 21:00:58.438658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 21:00:58.538948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 21:00:58.538978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0708 21:00:58.553269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 21:00:58.553476       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0708 21:00:58.580239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 21:00:58.580788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 21:00:58.580535       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 21:00:58.580944       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 21:00:58.593164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 21:00:58.593214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 21:00:58.598826       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0708 21:00:58.598879       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0708 21:00:58.690185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 21:00:58.690398       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 21:00:58.760082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 21:00:58.760191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0708 21:01:00.585252       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 08 21:15:00 embed-certs-239931 kubelet[3891]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:15:03 embed-certs-239931 kubelet[3891]: E0708 21:15:03.518725    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:15:18 embed-certs-239931 kubelet[3891]: E0708 21:15:18.521608    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:15:31 embed-certs-239931 kubelet[3891]: E0708 21:15:31.518673    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:15:43 embed-certs-239931 kubelet[3891]: E0708 21:15:43.518632    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:15:56 embed-certs-239931 kubelet[3891]: E0708 21:15:56.522058    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:16:00 embed-certs-239931 kubelet[3891]: E0708 21:16:00.537964    3891 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:16:00 embed-certs-239931 kubelet[3891]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:16:00 embed-certs-239931 kubelet[3891]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:16:00 embed-certs-239931 kubelet[3891]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:16:00 embed-certs-239931 kubelet[3891]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:16:10 embed-certs-239931 kubelet[3891]: E0708 21:16:10.519877    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:16:24 embed-certs-239931 kubelet[3891]: E0708 21:16:24.518570    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:16:37 embed-certs-239931 kubelet[3891]: E0708 21:16:37.518696    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:16:48 embed-certs-239931 kubelet[3891]: E0708 21:16:48.518993    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:16:59 embed-certs-239931 kubelet[3891]: E0708 21:16:59.518898    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	Jul 08 21:17:00 embed-certs-239931 kubelet[3891]: E0708 21:17:00.542202    3891 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:17:00 embed-certs-239931 kubelet[3891]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:17:00 embed-certs-239931 kubelet[3891]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:17:00 embed-certs-239931 kubelet[3891]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:17:00 embed-certs-239931 kubelet[3891]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:17:10 embed-certs-239931 kubelet[3891]: E0708 21:17:10.542446    3891 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 08 21:17:10 embed-certs-239931 kubelet[3891]: E0708 21:17:10.543151    3891 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 08 21:17:10 embed-certs-239931 kubelet[3891]: E0708 21:17:10.543718    3891 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7f2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-f2dkn_kube-system(1d3c3e8e-356d-40b9-8add-35eec096e9f0): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 08 21:17:10 embed-certs-239931 kubelet[3891]: E0708 21:17:10.543958    3891 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-f2dkn" podUID="1d3c3e8e-356d-40b9-8add-35eec096e9f0"
	
	
	==> storage-provisioner [ce0f4fb108aad8b7e4d5f290e6c38ba959eaff10eb996db4ead860b3da656ffe] <==
	I0708 21:01:16.218860       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 21:01:16.245377       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 21:01:16.245501       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 21:01:16.268137       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 21:01:16.270696       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-239931_21cca3f3-9f2a-4eca-bab0-e680410695f3!
	I0708 21:01:16.272271       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a99a8de8-7120-4951-95cc-51036a51cc59", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-239931_21cca3f3-9f2a-4eca-bab0-e680410695f3 became leader
	I0708 21:01:16.371621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-239931_21cca3f3-9f2a-4eca-bab0-e680410695f3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-239931 -n embed-certs-239931
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-239931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-f2dkn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-239931 describe pod metrics-server-569cc877fc-f2dkn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-239931 describe pod metrics-server-569cc877fc-f2dkn: exit status 1 (86.10331ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-f2dkn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-239931 describe pod metrics-server-569cc877fc-f2dkn: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (415.04s)
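Context for the post-mortem above: the only non-running pod, metrics-server-569cc877fc-f2dkn, is expected to stay unready in this suite, because the Audit table later in this report shows the metrics-server addon for embed-certs-239931 was enabled with --registries=MetricsServer=fake.domain, and the kubelet entries above show the resulting ErrImagePull/ImagePullBackOff loop for fake.domain/registry.k8s.io/echoserver:1.4. A minimal sketch of how to confirm the overridden image reference by hand (it assumes the addon's Deployment is named metrics-server in kube-system, consistent with the metrics-server-569cc877fc ReplicaSet seen in the logs):

	kubectl --context embed-certs-239931 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# should print the overridden reference, fake.domain/registry.k8s.io/echoserver:1.4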

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (390.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-028021 -n no-preload-028021
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-08 21:16:59.586516867 +0000 UTC m=+6479.499394664
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-028021 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-028021 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.776µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-028021 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
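(The deployment info above is blank because the preceding kubectl describe call already hit the context deadline.) For a manual re-check once the apiserver is reachable again, the image the test is looking for can be read straight from the Deployment; a sketch using the same context, namespace, and Deployment name the test queries above:

	kubectl --context no-preload-028021 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the test expects this to contain registry.k8s.io/echoserver:1.4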
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-028021 -n no-preload-028021
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-028021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-028021 logs -n 25: (1.374882749s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-059722                                 | cert-options-059722          | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:47 UTC |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:47 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-028021             | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-914355             | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC | 08 Jul 24 20:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 20:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-239931            | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112887                              | cert-expiration-112887       | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-733920 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-733920                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:49 UTC | 08 Jul 24 20:50 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028021                  | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-028021                                   | no-preload-028021            | jenkins | v1.33.1 | 08 Jul 24 20:50 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071971  | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-239931                 | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-239931                                  | embed-certs-239931           | jenkins | v1.33.1 | 08 Jul 24 20:51 UTC | 08 Jul 24 21:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071971       | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071971 | jenkins | v1.33.1 | 08 Jul 24 20:53 UTC | 08 Jul 24 21:01 UTC |
	|         | default-k8s-diff-port-071971                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-914355                              | old-k8s-version-914355       | jenkins | v1.33.1 | 08 Jul 24 21:12 UTC | 08 Jul 24 21:12 UTC |
	| start   | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273    | jenkins | v1.33.1 | 08 Jul 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273    | jenkins | v1.33.1 | 08 Jul 24 21:16 UTC | 08 Jul 24 21:16 UTC |
	| start   | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273    | jenkins | v1.33.1 | 08 Jul 24 21:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
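
The table above records the kubernetes-upgrade flow for profile kubernetes-upgrade-467273: start on v1.20.0, stop, then start again on v1.30.2 with the same kvm2/crio settings. Below is a minimal Go sketch that replays that sequence via the locally built minikube binary referenced by MINIKUBE_BIN further down (out/minikube-linux-amd64); the run helper and the hard-coded flag lists are illustrative assumptions taken from the table rows, not part of the test harness itself.

package main

import (
	"log"
	"os"
	"os/exec"
)

// run invokes the locally built minikube binary with the given arguments,
// streaming its output, and aborts on the first failure.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
}

func main() {
	const profile = "kubernetes-upgrade-467273"
	// Start on the old Kubernetes version, as in the first start entry above.
	run("start", "-p", profile, "--memory=2200", "--kubernetes-version=v1.20.0",
		"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
	// Stop, then start again on the new version, as in the last two entries.
	run("stop", "-p", profile)
	run("start", "-p", profile, "--memory=2200", "--kubernetes-version=v1.30.2",
		"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
}
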
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 21:16:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 21:16:40.349248   65818 out.go:291] Setting OutFile to fd 1 ...
	I0708 21:16:40.349477   65818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 21:16:40.349485   65818 out.go:304] Setting ErrFile to fd 2...
	I0708 21:16:40.349488   65818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 21:16:40.349673   65818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 21:16:40.350191   65818 out.go:298] Setting JSON to false
	I0708 21:16:40.351124   65818 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7149,"bootTime":1720466251,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 21:16:40.351187   65818 start.go:139] virtualization: kvm guest
	I0708 21:16:40.353744   65818 out.go:177] * [kubernetes-upgrade-467273] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 21:16:40.355482   65818 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 21:16:40.355487   65818 notify.go:220] Checking for updates...
	I0708 21:16:40.356892   65818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 21:16:40.358193   65818 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:16:40.359871   65818 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 21:16:40.361243   65818 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 21:16:40.362657   65818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 21:16:40.364440   65818 config.go:182] Loaded profile config "kubernetes-upgrade-467273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0708 21:16:40.364894   65818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:16:40.364967   65818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:16:40.380880   65818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43709
	I0708 21:16:40.381255   65818 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:16:40.381771   65818 main.go:141] libmachine: Using API Version  1
	I0708 21:16:40.381788   65818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:16:40.382121   65818 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:16:40.382329   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:16:40.382586   65818 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 21:16:40.382865   65818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:16:40.382902   65818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:16:40.398623   65818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I0708 21:16:40.399212   65818 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:16:40.399749   65818 main.go:141] libmachine: Using API Version  1
	I0708 21:16:40.399774   65818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:16:40.400146   65818 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:16:40.400359   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:16:40.440293   65818 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 21:16:40.441764   65818 start.go:297] selected driver: kvm2
	I0708 21:16:40.441795   65818 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-467273 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-467273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 21:16:40.441936   65818 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 21:16:40.442766   65818 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 21:16:40.442840   65818 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 21:16:40.458809   65818 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 21:16:40.459189   65818 cni.go:84] Creating CNI manager for ""
	I0708 21:16:40.459203   65818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 21:16:40.459234   65818 start.go:340] cluster config:
	{Name:kubernetes-upgrade-467273 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-467273 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 21:16:40.459349   65818 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 21:16:40.461322   65818 out.go:177] * Starting "kubernetes-upgrade-467273" primary control-plane node in "kubernetes-upgrade-467273" cluster
	I0708 21:16:40.462658   65818 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 21:16:40.462710   65818 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 21:16:40.462723   65818 cache.go:56] Caching tarball of preloaded images
	I0708 21:16:40.462825   65818 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 21:16:40.462872   65818 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 21:16:40.462969   65818 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kubernetes-upgrade-467273/config.json ...
	I0708 21:16:40.463173   65818 start.go:360] acquireMachinesLock for kubernetes-upgrade-467273: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 21:16:40.463227   65818 start.go:364] duration metric: took 30.293µs to acquireMachinesLock for "kubernetes-upgrade-467273"
	I0708 21:16:40.463248   65818 start.go:96] Skipping create...Using existing machine configuration
	I0708 21:16:40.463260   65818 fix.go:54] fixHost starting: 
	I0708 21:16:40.463540   65818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:16:40.463576   65818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:16:40.478711   65818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0708 21:16:40.479202   65818 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:16:40.479762   65818 main.go:141] libmachine: Using API Version  1
	I0708 21:16:40.479793   65818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:16:40.480161   65818 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:16:40.480365   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	I0708 21:16:40.480538   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .GetState
	I0708 21:16:40.482343   65818 fix.go:112] recreateIfNeeded on kubernetes-upgrade-467273: state=Stopped err=<nil>
	I0708 21:16:40.482381   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .DriverName
	W0708 21:16:40.482587   65818 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 21:16:40.484314   65818 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-467273" ...
	I0708 21:16:40.485645   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Calling .Start
	I0708 21:16:40.485884   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Ensuring networks are active...
	I0708 21:16:40.486705   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Ensuring network default is active
	I0708 21:16:40.487143   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Ensuring network mk-kubernetes-upgrade-467273 is active
	I0708 21:16:40.487680   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Getting domain xml...
	I0708 21:16:40.488568   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Creating domain...
	I0708 21:16:41.784023   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) Waiting to get IP...
	I0708 21:16:41.785010   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:41.785542   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:41.785616   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:41.785522   65854 retry.go:31] will retry after 268.504695ms: waiting for machine to come up
	I0708 21:16:42.056427   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:42.057025   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:42.057052   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:42.056978   65854 retry.go:31] will retry after 249.103177ms: waiting for machine to come up
	I0708 21:16:42.307389   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:42.307889   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:42.307920   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:42.307841   65854 retry.go:31] will retry after 424.737317ms: waiting for machine to come up
	I0708 21:16:42.734052   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:42.734538   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:42.734567   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:42.734493   65854 retry.go:31] will retry after 568.533538ms: waiting for machine to come up
	I0708 21:16:43.304170   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:43.304700   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:43.304727   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:43.304648   65854 retry.go:31] will retry after 628.88722ms: waiting for machine to come up
	I0708 21:16:43.935067   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:43.935542   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:43.935569   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:43.935433   65854 retry.go:31] will retry after 677.792186ms: waiting for machine to come up
	I0708 21:16:44.615147   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:44.615598   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:44.615645   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:44.615549   65854 retry.go:31] will retry after 830.992672ms: waiting for machine to come up
	I0708 21:16:45.448511   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:45.449093   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:45.449123   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:45.449036   65854 retry.go:31] will retry after 952.548063ms: waiting for machine to come up
	I0708 21:16:46.403545   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:46.404079   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:46.404104   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:46.404031   65854 retry.go:31] will retry after 1.430254396s: waiting for machine to come up
	I0708 21:16:47.836944   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:47.837383   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:47.837414   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:47.837343   65854 retry.go:31] will retry after 1.461587196s: waiting for machine to come up
	I0708 21:16:49.300581   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:49.301051   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:49.301078   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:49.301006   65854 retry.go:31] will retry after 2.07653499s: waiting for machine to come up
	I0708 21:16:51.378821   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:51.379369   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:51.379395   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:51.379315   65854 retry.go:31] will retry after 3.610447855s: waiting for machine to come up
	I0708 21:16:54.992605   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | domain kubernetes-upgrade-467273 has defined MAC address 52:54:00:16:6e:d6 in network mk-kubernetes-upgrade-467273
	I0708 21:16:54.992990   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | unable to find current IP address of domain kubernetes-upgrade-467273 in network mk-kubernetes-upgrade-467273
	I0708 21:16:54.993024   65818 main.go:141] libmachine: (kubernetes-upgrade-467273) DBG | I0708 21:16:54.992911   65854 retry.go:31] will retry after 3.280114322s: waiting for machine to come up
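
The "will retry after ...: waiting for machine to come up" lines above come from a polling loop that re-queries libvirt for the VM's IP with growing, jittered delays. The following is a minimal sketch of that pattern only, assuming nothing about minikube's actual retry.go beyond what the log shows; the growth factor, jitter, and attempt count are illustrative assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts run out, sleeping a
// randomized, growing delay between tries and logging each backoff.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base))) // add jitter
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		base = base * 3 / 2 // grow the base delay, roughly like the log above
	}
	return err
}

func main() {
	tries := 0
	_ = retry(13, 250*time.Millisecond, func() error {
		tries++
		if tries < 5 { // stand-in for "domain has no IP address yet"
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
}
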
	
	
	==> CRI-O <==
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.295630311Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473420295608688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11f66070-4dab-4c54-8e01-423ca295959a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.296447229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1119c299-631d-4fc5-b8a6-39c2f3e21add name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.296521491Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1119c299-631d-4fc5-b8a6-39c2f3e21add name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.296796339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472252236650146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed8483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec44202be050bdf4100a4056a19c9b444c0320568f8702a9b253d5cc8df2f4,PodSandboxId:b77db01fb1c53435402ee97d563b2b45bffac06b26f3ee070fd81df84e7c5f02,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720472230092260337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd9a12f5-1cee-4bb0-aa1b-2ee78ab9062b,},Annotations:map[string]string{io.kubernetes.container.hash: aa2ae0f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46,PodSandboxId:d9e968743a97793cde784e402f4baebd906ce873c157650203c43116c4a77e2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472229156041101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb6cr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1efedb-97f2-4bf0-a182-b8329b3bc6f1,},Annotations:map[string]string{io.kubernetes.container.hash: 93921204,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b,PodSandboxId:65aaa2f6076bf5e061050e568401625c2540826b9913f8ff916c3b4665638fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720472221456007652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa04234-ad5a-4a24-b6
a5-152933bb12b9,},Annotations:map[string]string{io.kubernetes.container.hash: b2ab9584,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720472221436827037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed84
83,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919,PodSandboxId:325368b4e3b1a494eb13c5da624041bd17571bd421621f004f13602791fd3656,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472216670177822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf4da22a5727a780be32a5a7e7c4cdb,},Annotations:map[string]string{io.kuber
netes.container.hash: 9942d3a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a,PodSandboxId:a85e18e661f9441d351d5e36f2d09921a0be38e0bfd39009eb43cc0d8e7795b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472216695156121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381e23949c09eb6afe9825084993c3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4,PodSandboxId:c2c699d466b8db7053c9f17f7121b9f0e8525df66a21e105d8ccf229ced8690f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472216673692656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d55ef8ee96afe42a43026500a04e191,},Annotations:map[string]string{io.kubernetes.container.hash: d3bcb
c66,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06,PodSandboxId:c770e062d6dfe7ed846741a9b5bfd2cc5a9155cafa9f29146e2a409f7a8e4e14,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472216653216479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36324e1aa77d8550081ad04dbe675433,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1119c299-631d-4fc5-b8a6-39c2f3e21add name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.340344434Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5617f1e0-ab5e-4960-9b4b-489f617ad417 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.340442136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5617f1e0-ab5e-4960-9b4b-489f617ad417 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.341407518Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66a2f18b-d6e8-4df3-b0ad-a89c096c184b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.341887109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473420341863297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66a2f18b-d6e8-4df3-b0ad-a89c096c184b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.342400118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3fd3847-1b77-45b6-a373-07cba2cad623 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.342476298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3fd3847-1b77-45b6-a373-07cba2cad623 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.342730321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472252236650146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed8483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec44202be050bdf4100a4056a19c9b444c0320568f8702a9b253d5cc8df2f4,PodSandboxId:b77db01fb1c53435402ee97d563b2b45bffac06b26f3ee070fd81df84e7c5f02,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720472230092260337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd9a12f5-1cee-4bb0-aa1b-2ee78ab9062b,},Annotations:map[string]string{io.kubernetes.container.hash: aa2ae0f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46,PodSandboxId:d9e968743a97793cde784e402f4baebd906ce873c157650203c43116c4a77e2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472229156041101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb6cr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1efedb-97f2-4bf0-a182-b8329b3bc6f1,},Annotations:map[string]string{io.kubernetes.container.hash: 93921204,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b,PodSandboxId:65aaa2f6076bf5e061050e568401625c2540826b9913f8ff916c3b4665638fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720472221456007652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa04234-ad5a-4a24-b6
a5-152933bb12b9,},Annotations:map[string]string{io.kubernetes.container.hash: b2ab9584,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720472221436827037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed84
83,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919,PodSandboxId:325368b4e3b1a494eb13c5da624041bd17571bd421621f004f13602791fd3656,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472216670177822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf4da22a5727a780be32a5a7e7c4cdb,},Annotations:map[string]string{io.kuber
netes.container.hash: 9942d3a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a,PodSandboxId:a85e18e661f9441d351d5e36f2d09921a0be38e0bfd39009eb43cc0d8e7795b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472216695156121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381e23949c09eb6afe9825084993c3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4,PodSandboxId:c2c699d466b8db7053c9f17f7121b9f0e8525df66a21e105d8ccf229ced8690f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472216673692656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d55ef8ee96afe42a43026500a04e191,},Annotations:map[string]string{io.kubernetes.container.hash: d3bcb
c66,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06,PodSandboxId:c770e062d6dfe7ed846741a9b5bfd2cc5a9155cafa9f29146e2a409f7a8e4e14,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472216653216479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36324e1aa77d8550081ad04dbe675433,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3fd3847-1b77-45b6-a373-07cba2cad623 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.384118259Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f69fbf9c-8447-42c1-9abb-e0475deea192 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.384216890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f69fbf9c-8447-42c1-9abb-e0475deea192 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.385664754Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=feea0cd3-a90c-4e12-9526-4e1925a81930 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.386233952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473420386210554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=feea0cd3-a90c-4e12-9526-4e1925a81930 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.386923766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66bc9312-67e7-4cd8-ab7c-55902167f624 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.387010125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66bc9312-67e7-4cd8-ab7c-55902167f624 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.387270468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472252236650146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed8483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec44202be050bdf4100a4056a19c9b444c0320568f8702a9b253d5cc8df2f4,PodSandboxId:b77db01fb1c53435402ee97d563b2b45bffac06b26f3ee070fd81df84e7c5f02,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720472230092260337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd9a12f5-1cee-4bb0-aa1b-2ee78ab9062b,},Annotations:map[string]string{io.kubernetes.container.hash: aa2ae0f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46,PodSandboxId:d9e968743a97793cde784e402f4baebd906ce873c157650203c43116c4a77e2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472229156041101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb6cr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1efedb-97f2-4bf0-a182-b8329b3bc6f1,},Annotations:map[string]string{io.kubernetes.container.hash: 93921204,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b,PodSandboxId:65aaa2f6076bf5e061050e568401625c2540826b9913f8ff916c3b4665638fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720472221456007652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa04234-ad5a-4a24-b6
a5-152933bb12b9,},Annotations:map[string]string{io.kubernetes.container.hash: b2ab9584,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720472221436827037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed84
83,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919,PodSandboxId:325368b4e3b1a494eb13c5da624041bd17571bd421621f004f13602791fd3656,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472216670177822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf4da22a5727a780be32a5a7e7c4cdb,},Annotations:map[string]string{io.kuber
netes.container.hash: 9942d3a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a,PodSandboxId:a85e18e661f9441d351d5e36f2d09921a0be38e0bfd39009eb43cc0d8e7795b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472216695156121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381e23949c09eb6afe9825084993c3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4,PodSandboxId:c2c699d466b8db7053c9f17f7121b9f0e8525df66a21e105d8ccf229ced8690f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472216673692656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d55ef8ee96afe42a43026500a04e191,},Annotations:map[string]string{io.kubernetes.container.hash: d3bcb
c66,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06,PodSandboxId:c770e062d6dfe7ed846741a9b5bfd2cc5a9155cafa9f29146e2a409f7a8e4e14,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472216653216479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36324e1aa77d8550081ad04dbe675433,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66bc9312-67e7-4cd8-ab7c-55902167f624 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.432676819Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aca2b626-165b-426e-bde5-d98d62f6d8c1 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.432763070Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aca2b626-165b-426e-bde5-d98d62f6d8c1 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.434091483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11231d59-80f1-4ffa-833b-068d789f2a01 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.434440134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473420434417290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11231d59-80f1-4ffa-833b-068d789f2a01 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.435151382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cdd41c6-3b3d-4356-966e-5c369902ca9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.435224209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cdd41c6-3b3d-4356-966e-5c369902ca9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:17:00 no-preload-028021 crio[719]: time="2024-07-08 21:17:00.435427481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472252236650146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed8483,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec44202be050bdf4100a4056a19c9b444c0320568f8702a9b253d5cc8df2f4,PodSandboxId:b77db01fb1c53435402ee97d563b2b45bffac06b26f3ee070fd81df84e7c5f02,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1720472230092260337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd9a12f5-1cee-4bb0-aa1b-2ee78ab9062b,},Annotations:map[string]string{io.kubernetes.container.hash: aa2ae0f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46,PodSandboxId:d9e968743a97793cde784e402f4baebd906ce873c157650203c43116c4a77e2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472229156041101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb6cr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1efedb-97f2-4bf0-a182-b8329b3bc6f1,},Annotations:map[string]string{io.kubernetes.container.hash: 93921204,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b,PodSandboxId:65aaa2f6076bf5e061050e568401625c2540826b9913f8ff916c3b4665638fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1720472221456007652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa04234-ad5a-4a24-b6
a5-152933bb12b9,},Annotations:map[string]string{io.kubernetes.container.hash: b2ab9584,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a,PodSandboxId:62fbc1cf8e9ce3e3f1f80513ef0befd1d80ace76f57c13c6b0722373165f4b43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1720472221436827037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca0a23e-8d09-4541-b80b-87242bed84
83,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4ffe34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919,PodSandboxId:325368b4e3b1a494eb13c5da624041bd17571bd421621f004f13602791fd3656,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472216670177822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf4da22a5727a780be32a5a7e7c4cdb,},Annotations:map[string]string{io.kuber
netes.container.hash: 9942d3a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a,PodSandboxId:a85e18e661f9441d351d5e36f2d09921a0be38e0bfd39009eb43cc0d8e7795b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1720472216695156121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381e23949c09eb6afe9825084993c3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4,PodSandboxId:c2c699d466b8db7053c9f17f7121b9f0e8525df66a21e105d8ccf229ced8690f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472216673692656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d55ef8ee96afe42a43026500a04e191,},Annotations:map[string]string{io.kubernetes.container.hash: d3bcb
c66,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06,PodSandboxId:c770e062d6dfe7ed846741a9b5bfd2cc5a9155cafa9f29146e2a409f7a8e4e14,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472216653216479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-028021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36324e1aa77d8550081ad04dbe675433,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8cdd41c6-3b3d-4356-966e-5c369902ca9b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7fef16ca13964       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   62fbc1cf8e9ce       storage-provisioner
	baec44202be05       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   b77db01fb1c53       busybox
	d36b82d801f16       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   d9e968743a977       coredns-7db6d8ff4d-bb6cr
	abef906794957       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      19 minutes ago      Running             kube-proxy                1                   65aaa2f6076bf       kube-proxy-6p6l6
	a08f999b554b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   62fbc1cf8e9ce       storage-provisioner
	7c6733c9e5040       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      20 minutes ago      Running             kube-scheduler            1                   a85e18e661f94       kube-scheduler-no-preload-028021
	32bb552a97107       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      20 minutes ago      Running             kube-apiserver            1                   c2c699d466b8d       kube-apiserver-no-preload-028021
	3c78c8f11d8c3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   325368b4e3b1a       etcd-no-preload-028021
	2e901eb02d631       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      20 minutes ago      Running             kube-controller-manager   1                   c770e062d6dfe       kube-controller-manager-no-preload-028021
	
	
	==> coredns [d36b82d801f16c12767baa970b2e14a6f6d14c175a591fef0e4d9bb47a332a46] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53802 - 13940 "HINFO IN 4359606603896240805.7306614040164904022. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012443562s
	
	
	==> describe nodes <==
	Name:               no-preload-028021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-028021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=no-preload-028021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T20_47_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 20:47:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-028021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 21:16:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 21:12:50 +0000   Mon, 08 Jul 2024 20:47:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 21:12:50 +0000   Mon, 08 Jul 2024 20:47:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 21:12:50 +0000   Mon, 08 Jul 2024 20:47:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 21:12:50 +0000   Mon, 08 Jul 2024 20:57:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    no-preload-028021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b7b92ffb8b7447e9dbe49719c6af7c0
	  System UUID:                2b7b92ff-b8b7-447e-9dbe-49719c6af7c0
	  Boot ID:                    88f2572a-61d3-4bee-b6a2-51cd06d2f771
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 coredns-7db6d8ff4d-bb6cr                     100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     28m
	  kube-system                 etcd-no-preload-028021                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         29m
	  kube-system                 kube-apiserver-no-preload-028021             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         29m
	  kube-system                 kube-controller-manager-no-preload-028021    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         29m
	  kube-system                 kube-proxy-6p6l6                             0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 kube-scheduler-no-preload-028021             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         29m
	  kube-system                 metrics-server-569cc877fc-4kpfm              100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         28m
	  kube-system                 storage-provisioner                          0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
	  memory             370Mi (17%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node no-preload-028021 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node no-preload-028021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node no-preload-028021 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node no-preload-028021 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-028021 event: Registered Node no-preload-028021 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-028021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-028021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-028021 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-028021 event: Registered Node no-preload-028021 in Controller
	
	
	==> dmesg <==
	[Jul 8 20:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052845] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040103] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.819579] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.399465] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.625399] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.045115] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.068281] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079915] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.201510] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.140397] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.320915] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[ +16.982326] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.060419] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.120370] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[Jul 8 20:57] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.981451] systemd-fstab-generator[1979]: Ignoring "noauto" option for root device
	[  +1.682382] kauditd_printk_skb: 56 callbacks suppressed
	[  +7.538657] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [3c78c8f11d8c34f8f1ad5303ce916806ac5ca514c1a8a3aabf132f115e9b4919] <==
	{"level":"info","ts":"2024-07-08T20:56:57.186013Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.108:2380"}
	{"level":"info","ts":"2024-07-08T20:56:57.186041Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.108:2380"}
	{"level":"info","ts":"2024-07-08T20:56:58.760115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b067627ba430497 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-08T20:56:58.760229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b067627ba430497 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-08T20:56:58.760297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b067627ba430497 received MsgPreVoteResp from 3b067627ba430497 at term 2"}
	{"level":"info","ts":"2024-07-08T20:56:58.760334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b067627ba430497 became candidate at term 3"}
	{"level":"info","ts":"2024-07-08T20:56:58.760416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b067627ba430497 received MsgVoteResp from 3b067627ba430497 at term 3"}
	{"level":"info","ts":"2024-07-08T20:56:58.760454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b067627ba430497 became leader at term 3"}
	{"level":"info","ts":"2024-07-08T20:56:58.760494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3b067627ba430497 elected leader 3b067627ba430497 at term 3"}
	{"level":"info","ts":"2024-07-08T20:56:58.772433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T20:56:58.773875Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3b067627ba430497","local-member-attributes":"{Name:no-preload-028021 ClientURLs:[https://192.168.39.108:2379]}","request-path":"/0/members/3b067627ba430497/attributes","cluster-id":"ad19b3444912fc40","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T20:56:58.774036Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T20:56:58.774203Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T20:56:58.774232Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T20:56:58.775915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T20:56:58.777625Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.108:2379"}
	{"level":"info","ts":"2024-07-08T21:06:58.808606Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":800}
	{"level":"info","ts":"2024-07-08T21:06:58.820807Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":800,"took":"11.815513ms","hash":2016596171,"current-db-size-bytes":2535424,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2535424,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-07-08T21:06:58.820877Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2016596171,"revision":800,"compact-revision":-1}
	{"level":"info","ts":"2024-07-08T21:11:58.815934Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1043}
	{"level":"info","ts":"2024-07-08T21:11:58.821267Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1043,"took":"4.905007ms","hash":775999542,"current-db-size-bytes":2535424,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-08T21:11:58.821367Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":775999542,"revision":1043,"compact-revision":800}
	{"level":"info","ts":"2024-07-08T21:16:58.827988Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1287}
	{"level":"info","ts":"2024-07-08T21:16:58.83305Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1287,"took":"4.366784ms","hash":1557328713,"current-db-size-bytes":2535424,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-08T21:16:58.833153Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1557328713,"revision":1287,"compact-revision":1043}
	
	
	==> kernel <==
	 21:17:00 up 20 min,  0 users,  load average: 0.13, 0.18, 0.12
	Linux no-preload-028021 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [32bb552a9710794ad35d9b5b224ba732cdaa0cb40f3776ee6e61a7d9f1ed87b4] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0708 21:12:01.236356       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:12:01.236727       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:12:01.236813       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:12:01.236538       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:12:01.237001       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:12:01.238244       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:13:01.237058       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:13:01.237451       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:13:01.237510       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:13:01.239432       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:13:01.239517       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:13:01.239527       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:15:01.238436       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:15:01.238846       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:15:01.238883       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:15:01.240137       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:15:01.240276       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:15:01.240320       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:17:00.242783       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:17:00.242923       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	
	
	==> kube-controller-manager [2e901eb02d631e4f9b9596a18997c6c90a78b5347a1ede61f0fa6acf81267c06] <==
	I0708 21:11:15.326946       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:11:44.821775       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:11:45.335913       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:12:14.829485       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:12:15.345143       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:12:44.835439       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:12:45.354070       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:13:14.841880       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:13:15.365527       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0708 21:13:18.041696       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="316.093µs"
	I0708 21:13:33.034524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="49.682µs"
	E0708 21:13:44.847072       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:13:45.373762       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:14:14.853141       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:14:15.382751       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:14:44.858057       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:14:45.392369       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:15:14.864222       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:15:15.402059       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:15:44.869772       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:15:45.412386       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:16:14.875873       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:16:15.421900       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:16:44.881718       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:16:45.432821       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [abef9067949570bf6d51f33bcd530b938c83d0b6b860e75699ec8b5db1b61d0b] <==
	I0708 20:57:01.630925       1 server_linux.go:69] "Using iptables proxy"
	I0708 20:57:01.644319       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.108"]
	I0708 20:57:01.683651       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 20:57:01.683703       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 20:57:01.683721       1 server_linux.go:165] "Using iptables Proxier"
	I0708 20:57:01.686657       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 20:57:01.686891       1 server.go:872] "Version info" version="v1.30.2"
	I0708 20:57:01.686922       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:57:01.688220       1 config.go:192] "Starting service config controller"
	I0708 20:57:01.688251       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 20:57:01.688276       1 config.go:101] "Starting endpoint slice config controller"
	I0708 20:57:01.688280       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 20:57:01.688858       1 config.go:319] "Starting node config controller"
	I0708 20:57:01.688893       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 20:57:01.789063       1 shared_informer.go:320] Caches are synced for node config
	I0708 20:57:01.789100       1 shared_informer.go:320] Caches are synced for service config
	I0708 20:57:01.789114       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7c6733c9e504059b1a9ad23c5c7e01ec05ee81ce18395b23135eaa1bfcd2a26a] <==
	I0708 20:56:58.208533       1 serving.go:380] Generated self-signed cert in-memory
	W0708 20:57:00.121622       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 20:57:00.121819       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 20:57:00.121908       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 20:57:00.121934       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 20:57:00.213394       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0708 20:57:00.213603       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:57:00.225294       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0708 20:57:00.225406       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 20:57:00.225724       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0708 20:57:00.225825       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0708 20:57:00.325728       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 08 21:14:56 no-preload-028021 kubelet[1359]: E0708 21:14:56.038841    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:14:56 no-preload-028021 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:14:56 no-preload-028021 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:14:56 no-preload-028021 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:14:56 no-preload-028021 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:14:57 no-preload-028021 kubelet[1359]: E0708 21:14:57.021619    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:15:11 no-preload-028021 kubelet[1359]: E0708 21:15:11.019892    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:15:26 no-preload-028021 kubelet[1359]: E0708 21:15:26.021766    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:15:37 no-preload-028021 kubelet[1359]: E0708 21:15:37.019238    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:15:52 no-preload-028021 kubelet[1359]: E0708 21:15:52.019954    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:15:56 no-preload-028021 kubelet[1359]: E0708 21:15:56.035337    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:15:56 no-preload-028021 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:15:56 no-preload-028021 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:15:56 no-preload-028021 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:15:56 no-preload-028021 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:16:05 no-preload-028021 kubelet[1359]: E0708 21:16:05.020465    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:16:16 no-preload-028021 kubelet[1359]: E0708 21:16:16.021678    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:16:28 no-preload-028021 kubelet[1359]: E0708 21:16:28.019645    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:16:40 no-preload-028021 kubelet[1359]: E0708 21:16:40.020935    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:16:51 no-preload-028021 kubelet[1359]: E0708 21:16:51.020167    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4kpfm" podUID="c37f4622-163f-48bf-9bb4-5a20b88187ad"
	Jul 08 21:16:56 no-preload-028021 kubelet[1359]: E0708 21:16:56.038434    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:16:56 no-preload-028021 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:16:56 no-preload-028021 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:16:56 no-preload-028021 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:16:56 no-preload-028021 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [7fef16ca139641633962c0a120097ea9d67c4078fba80ac41d7bab672a31145b] <==
	I0708 20:57:32.337970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 20:57:32.348471       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 20:57:32.348649       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 20:57:49.749251       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 20:57:49.749425       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-028021_5f3b64ba-d14a-4614-82b4-eac6452feda0!
	I0708 20:57:49.750357       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ac2159a8-00d0-402d-b75e-f4a46bc30629", APIVersion:"v1", ResourceVersion:"584", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-028021_5f3b64ba-d14a-4614-82b4-eac6452feda0 became leader
	I0708 20:57:49.849765       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-028021_5f3b64ba-d14a-4614-82b4-eac6452feda0!
	
	
	==> storage-provisioner [a08f999b554b95eef559d3145224af52cb9c2606c04d4ca42bf5e355526ac69a] <==
	I0708 20:57:01.582028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0708 20:57:31.586346       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-028021 -n no-preload-028021
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-028021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-4kpfm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-028021 describe pod metrics-server-569cc877fc-4kpfm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-028021 describe pod metrics-server-569cc877fc-4kpfm: exit status 1 (99.157968ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-4kpfm" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-028021 describe pod metrics-server-569cc877fc-4kpfm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (390.20s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-08 21:19:42.228597896 +0000 UTC m=+6642.141475700
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
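For reference, the wait that timed out above can be reproduced by hand. This is only a sketch of an equivalent check (assuming the same context name, label selector, namespace, and timeout as logged above), not the harness's actual code path:

	kubectl --context default-k8s-diff-port-071971 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m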
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-071971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-071971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (81.849468ms)

** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-071971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
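The image assertion above can also be checked manually. A minimal sketch of an equivalent query (assuming the kubernetes-dashboard namespace and its deployments existed; in this run they do not, so it would return the same NotFound error shown above):

	kubectl --context default-k8s-diff-port-071971 -n kubernetes-dashboard get deploy -o jsonpath='{.items[*].spec.template.spec.containers[*].image}'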
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-071971 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-071971 logs -n 25: (1.434306644s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273 | jenkins | v1.33.1 | 08 Jul 24 21:12 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273 | jenkins | v1.33.1 | 08 Jul 24 21:16 UTC | 08 Jul 24 21:16 UTC |
	| start   | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273 | jenkins | v1.33.1 | 08 Jul 24 21:16 UTC | 08 Jul 24 21:17 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p no-preload-028021                                   | no-preload-028021         | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:17 UTC |
	| start   | -p stopped-upgrade-957981                              | minikube                  | jenkins | v1.26.0 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:17 UTC |
	|         | --memory=2200 --vm-driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p embed-certs-239931                                  | embed-certs-239931        | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:17 UTC |
	| start   | -p newest-cni-292907 --memory=2200 --alsologtostderr   | newest-cni-292907         | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                           |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                           |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                           |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                           |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273 | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273 | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:18 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-957981 stop                            | minikube                  | jenkins | v1.26.0 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:17 UTC |
	| start   | -p stopped-upgrade-957981                              | stopped-upgrade-957981    | jenkins | v1.33.1 | 08 Jul 24 21:17 UTC | 08 Jul 24 21:18 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-467273                           | kubernetes-upgrade-467273 | jenkins | v1.33.1 | 08 Jul 24 21:18 UTC | 08 Jul 24 21:18 UTC |
	| start   | -p auto-088829 --memory=3072                           | auto-088829               | jenkins | v1.33.1 | 08 Jul 24 21:18 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-292907             | newest-cni-292907         | jenkins | v1.33.1 | 08 Jul 24 21:18 UTC | 08 Jul 24 21:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p newest-cni-292907                                   | newest-cni-292907         | jenkins | v1.33.1 | 08 Jul 24 21:18 UTC | 08 Jul 24 21:18 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-292907                  | newest-cni-292907         | jenkins | v1.33.1 | 08 Jul 24 21:18 UTC | 08 Jul 24 21:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p newest-cni-292907 --memory=2200 --alsologtostderr   | newest-cni-292907         | jenkins | v1.33.1 | 08 Jul 24 21:18 UTC | 08 Jul 24 21:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                           |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                           |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                           |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                           |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-957981                              | stopped-upgrade-957981    | jenkins | v1.33.1 | 08 Jul 24 21:18 UTC | 08 Jul 24 21:18 UTC |
	| start   | -p kindnet-088829                                      | kindnet-088829            | jenkins | v1.33.1 | 08 Jul 24 21:18 UTC |                     |
	|         | --memory=3072                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| image   | newest-cni-292907 image list                           | newest-cni-292907         | jenkins | v1.33.1 | 08 Jul 24 21:19 UTC | 08 Jul 24 21:19 UTC |
	|         | --format=json                                          |                           |         |         |                     |                     |
	| pause   | -p newest-cni-292907                                   | newest-cni-292907         | jenkins | v1.33.1 | 08 Jul 24 21:19 UTC | 08 Jul 24 21:19 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| unpause | -p newest-cni-292907                                   | newest-cni-292907         | jenkins | v1.33.1 | 08 Jul 24 21:19 UTC | 08 Jul 24 21:19 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| delete  | -p newest-cni-292907                                   | newest-cni-292907         | jenkins | v1.33.1 | 08 Jul 24 21:19 UTC | 08 Jul 24 21:19 UTC |
	| delete  | -p newest-cni-292907                                   | newest-cni-292907         | jenkins | v1.33.1 | 08 Jul 24 21:19 UTC | 08 Jul 24 21:19 UTC |
	| start   | -p calico-088829 --memory=3072                         | calico-088829             | jenkins | v1.33.1 | 08 Jul 24 21:19 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                           |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 21:19:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 21:19:30.987511   69506 out.go:291] Setting OutFile to fd 1 ...
	I0708 21:19:30.987626   69506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 21:19:30.987634   69506 out.go:304] Setting ErrFile to fd 2...
	I0708 21:19:30.987639   69506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 21:19:30.987869   69506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 21:19:30.988455   69506 out.go:298] Setting JSON to false
	I0708 21:19:30.989537   69506 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7320,"bootTime":1720466251,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 21:19:30.989610   69506 start.go:139] virtualization: kvm guest
	I0708 21:19:30.991996   69506 out.go:177] * [calico-088829] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 21:19:30.993619   69506 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 21:19:30.993671   69506 notify.go:220] Checking for updates...
	I0708 21:19:30.996420   69506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 21:19:30.998066   69506 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 21:19:30.999680   69506 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 21:19:31.001351   69506 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 21:19:31.003014   69506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 21:19:31.004937   69506 config.go:182] Loaded profile config "auto-088829": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:19:31.005058   69506 config.go:182] Loaded profile config "default-k8s-diff-port-071971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:19:31.005160   69506 config.go:182] Loaded profile config "kindnet-088829": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:19:31.005259   69506 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 21:19:31.047866   69506 out.go:177] * Using the kvm2 driver based on user configuration
	I0708 21:19:31.049225   69506 start.go:297] selected driver: kvm2
	I0708 21:19:31.049244   69506 start.go:901] validating driver "kvm2" against <nil>
	I0708 21:19:31.049260   69506 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 21:19:31.050045   69506 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 21:19:31.050153   69506 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 21:19:31.067003   69506 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 21:19:31.067052   69506 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 21:19:31.067322   69506 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 21:19:31.067396   69506 cni.go:84] Creating CNI manager for "calico"
	I0708 21:19:31.067414   69506 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0708 21:19:31.067486   69506 start.go:340] cluster config:
	{Name:calico-088829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-088829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 21:19:31.067599   69506 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 21:19:31.069404   69506 out.go:177] * Starting "calico-088829" primary control-plane node in "calico-088829" cluster
	I0708 21:19:31.070691   69506 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 21:19:31.070743   69506 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0708 21:19:31.070753   69506 cache.go:56] Caching tarball of preloaded images
	I0708 21:19:31.070829   69506 preload.go:173] Found /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0708 21:19:31.070843   69506 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0708 21:19:31.070940   69506 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/calico-088829/config.json ...
	I0708 21:19:31.070961   69506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/calico-088829/config.json: {Name:mk22d89365bbcc06075f9aec5a4edfc084ae0f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:19:31.071138   69506 start.go:360] acquireMachinesLock for calico-088829: {Name:mk02da8a2ea9557af4e83be5d1f75b14760e19ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 21:19:31.605126   69506 start.go:364] duration metric: took 533.949897ms to acquireMachinesLock for "calico-088829"
	I0708 21:19:31.605203   69506 start.go:93] Provisioning new machine with config: &{Name:calico-088829 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-088829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0708 21:19:31.605303   69506 start.go:125] createHost starting for "" (driver="kvm2")
	I0708 21:19:29.572823   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:29.573549   68616 main.go:141] libmachine: (kindnet-088829) Found IP for machine: 192.168.39.194
	I0708 21:19:29.573586   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has current primary IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:29.573596   68616 main.go:141] libmachine: (kindnet-088829) Reserving static IP address...
	I0708 21:19:29.573914   68616 main.go:141] libmachine: (kindnet-088829) DBG | unable to find host DHCP lease matching {name: "kindnet-088829", mac: "52:54:00:8f:e8:3f", ip: "192.168.39.194"} in network mk-kindnet-088829
	I0708 21:19:29.668047   68616 main.go:141] libmachine: (kindnet-088829) DBG | Getting to WaitForSSH function...
	I0708 21:19:29.668071   68616 main.go:141] libmachine: (kindnet-088829) Reserved static IP address: 192.168.39.194
	I0708 21:19:29.668084   68616 main.go:141] libmachine: (kindnet-088829) Waiting for SSH to be available...
	I0708 21:19:29.671973   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:29.672632   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:29.672673   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:29.672809   68616 main.go:141] libmachine: (kindnet-088829) DBG | Using SSH client type: external
	I0708 21:19:29.672833   68616 main.go:141] libmachine: (kindnet-088829) DBG | Using SSH private key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/kindnet-088829/id_rsa (-rw-------)
	I0708 21:19:29.672861   68616 main.go:141] libmachine: (kindnet-088829) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19195-5988/.minikube/machines/kindnet-088829/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0708 21:19:29.672878   68616 main.go:141] libmachine: (kindnet-088829) DBG | About to run SSH command:
	I0708 21:19:29.672889   68616 main.go:141] libmachine: (kindnet-088829) DBG | exit 0
	I0708 21:19:29.809659   68616 main.go:141] libmachine: (kindnet-088829) DBG | SSH cmd err, output: <nil>: 
	I0708 21:19:29.810119   68616 main.go:141] libmachine: (kindnet-088829) KVM machine creation complete!
	I0708 21:19:29.810366   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetConfigRaw
	I0708 21:19:29.810916   68616 main.go:141] libmachine: (kindnet-088829) Calling .DriverName
	I0708 21:19:29.811202   68616 main.go:141] libmachine: (kindnet-088829) Calling .DriverName
	I0708 21:19:29.811437   68616 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0708 21:19:29.811481   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetState
	I0708 21:19:29.813007   68616 main.go:141] libmachine: Detecting operating system of created instance...
	I0708 21:19:29.813026   68616 main.go:141] libmachine: Waiting for SSH to be available...
	I0708 21:19:29.813031   68616 main.go:141] libmachine: Getting to WaitForSSH function...
	I0708 21:19:29.813038   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHHostname
	I0708 21:19:29.815956   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:29.816439   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:29.816467   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:29.816658   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHPort
	I0708 21:19:29.816846   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:29.817045   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:29.817205   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHUsername
	I0708 21:19:29.817379   68616 main.go:141] libmachine: Using SSH client type: native
	I0708 21:19:29.817652   68616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0708 21:19:29.817672   68616 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0708 21:19:29.931350   68616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 21:19:29.931378   68616 main.go:141] libmachine: Detecting the provisioner...
	I0708 21:19:29.931394   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHHostname
	I0708 21:19:29.934438   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:29.934795   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:29.934824   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:29.935091   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHPort
	I0708 21:19:29.935282   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:29.935399   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:29.935559   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHUsername
	I0708 21:19:29.935791   68616 main.go:141] libmachine: Using SSH client type: native
	I0708 21:19:29.936017   68616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0708 21:19:29.936037   68616 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0708 21:19:30.057324   68616 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0708 21:19:30.057417   68616 main.go:141] libmachine: found compatible host: buildroot
	I0708 21:19:30.057431   68616 main.go:141] libmachine: Provisioning with buildroot...
	I0708 21:19:30.057448   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetMachineName
	I0708 21:19:30.057683   68616 buildroot.go:166] provisioning hostname "kindnet-088829"
	I0708 21:19:30.057717   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetMachineName
	I0708 21:19:30.057919   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHHostname
	I0708 21:19:30.062802   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:30.176634   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:30.176671   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:30.176875   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHPort
	I0708 21:19:30.177095   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:30.177348   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:30.177523   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHUsername
	I0708 21:19:30.177832   68616 main.go:141] libmachine: Using SSH client type: native
	I0708 21:19:30.178074   68616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0708 21:19:30.178097   68616 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-088829 && echo "kindnet-088829" | sudo tee /etc/hostname
	I0708 21:19:30.313074   68616 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-088829
	
	I0708 21:19:30.313121   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHHostname
	I0708 21:19:30.532568   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:30.532944   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:30.532978   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:30.533136   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHPort
	I0708 21:19:30.533380   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:30.533586   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:30.533743   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHUsername
	I0708 21:19:30.533916   68616 main.go:141] libmachine: Using SSH client type: native
	I0708 21:19:30.534136   68616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0708 21:19:30.534158   68616 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-088829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-088829/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-088829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 21:19:30.666819   68616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 21:19:30.666882   68616 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19195-5988/.minikube CaCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19195-5988/.minikube}
	I0708 21:19:30.666935   68616 buildroot.go:174] setting up certificates
	I0708 21:19:30.666950   68616 provision.go:84] configureAuth start
	I0708 21:19:30.666967   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetMachineName
	I0708 21:19:30.667222   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetIP
	I0708 21:19:30.670225   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:30.670561   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:30.670590   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:30.670745   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHHostname
	I0708 21:19:30.673358   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:30.673871   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:30.673905   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:30.674182   68616 provision.go:143] copyHostCerts
	I0708 21:19:30.674241   68616 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem, removing ...
	I0708 21:19:30.674250   68616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem
	I0708 21:19:30.674303   68616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/key.pem (1679 bytes)
	I0708 21:19:30.674410   68616 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem, removing ...
	I0708 21:19:30.674419   68616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem
	I0708 21:19:30.674439   68616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/ca.pem (1078 bytes)
	I0708 21:19:30.674501   68616 exec_runner.go:144] found /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem, removing ...
	I0708 21:19:30.674508   68616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem
	I0708 21:19:30.674524   68616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19195-5988/.minikube/cert.pem (1123 bytes)
	I0708 21:19:30.674580   68616 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem org=jenkins.kindnet-088829 san=[127.0.0.1 192.168.39.194 kindnet-088829 localhost minikube]
	I0708 21:19:30.840662   68616 provision.go:177] copyRemoteCerts
	I0708 21:19:30.840722   68616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 21:19:30.840750   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHHostname
	I0708 21:19:30.843477   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:30.843839   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:30.843909   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:30.844106   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHPort
	I0708 21:19:30.844330   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:30.844530   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHUsername
	I0708 21:19:30.845082   68616 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kindnet-088829/id_rsa Username:docker}
	I0708 21:19:30.935120   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 21:19:30.965018   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0708 21:19:30.997237   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 21:19:31.033646   68616 provision.go:87] duration metric: took 366.679896ms to configureAuth
	I0708 21:19:31.033679   68616 buildroot.go:189] setting minikube options for container-runtime
	I0708 21:19:31.033876   68616 config.go:182] Loaded profile config "kindnet-088829": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 21:19:31.033964   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHHostname
	I0708 21:19:31.037257   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.037879   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:31.037909   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.038169   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHPort
	I0708 21:19:31.038410   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:31.038625   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:31.038797   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHUsername
	I0708 21:19:31.039020   68616 main.go:141] libmachine: Using SSH client type: native
	I0708 21:19:31.039263   68616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0708 21:19:31.039286   68616 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0708 21:19:31.330413   68616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 21:19:31.330480   68616 main.go:141] libmachine: Checking connection to Docker...
	I0708 21:19:31.330492   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetURL
	I0708 21:19:31.332051   68616 main.go:141] libmachine: (kindnet-088829) DBG | Using libvirt version 6000000
	I0708 21:19:31.334694   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.335192   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:31.335225   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.335437   68616 main.go:141] libmachine: Docker is up and running!
	I0708 21:19:31.335474   68616 main.go:141] libmachine: Reticulating splines...
	I0708 21:19:31.335483   68616 client.go:171] duration metric: took 24.335529705s to LocalClient.Create
	I0708 21:19:31.335512   68616 start.go:167] duration metric: took 24.335596597s to libmachine.API.Create "kindnet-088829"
	I0708 21:19:31.335525   68616 start.go:293] postStartSetup for "kindnet-088829" (driver="kvm2")
	I0708 21:19:31.335538   68616 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 21:19:31.335561   68616 main.go:141] libmachine: (kindnet-088829) Calling .DriverName
	I0708 21:19:31.335813   68616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 21:19:31.335838   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHHostname
	I0708 21:19:31.338677   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.339143   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:31.339174   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.339298   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHPort
	I0708 21:19:31.339520   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:31.339710   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHUsername
	I0708 21:19:31.339884   68616 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kindnet-088829/id_rsa Username:docker}
	I0708 21:19:31.432702   68616 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 21:19:31.438628   68616 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 21:19:31.438660   68616 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/addons for local assets ...
	I0708 21:19:31.438738   68616 filesync.go:126] Scanning /home/jenkins/minikube-integration/19195-5988/.minikube/files for local assets ...
	I0708 21:19:31.438808   68616 filesync.go:149] local asset: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem -> 131412.pem in /etc/ssl/certs
	I0708 21:19:31.438930   68616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 21:19:31.450199   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /etc/ssl/certs/131412.pem (1708 bytes)
	I0708 21:19:31.482222   68616 start.go:296] duration metric: took 146.684528ms for postStartSetup
	I0708 21:19:31.482270   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetConfigRaw
	I0708 21:19:31.482904   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetIP
	I0708 21:19:31.485629   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.485983   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:31.486013   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.486329   68616 profile.go:143] Saving config to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/config.json ...
	I0708 21:19:31.486592   68616 start.go:128] duration metric: took 24.513380498s to createHost
	I0708 21:19:31.486626   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHHostname
	I0708 21:19:31.489225   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.489644   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:31.489668   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.489869   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHPort
	I0708 21:19:31.490057   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:31.490218   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:31.490372   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHUsername
	I0708 21:19:31.490568   68616 main.go:141] libmachine: Using SSH client type: native
	I0708 21:19:31.490777   68616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0708 21:19:31.490804   68616 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 21:19:31.604932   68616 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720473571.577844958
	
	I0708 21:19:31.604966   68616 fix.go:216] guest clock: 1720473571.577844958
	I0708 21:19:31.604977   68616 fix.go:229] Guest: 2024-07-08 21:19:31.577844958 +0000 UTC Remote: 2024-07-08 21:19:31.486610829 +0000 UTC m=+44.593937850 (delta=91.234129ms)
	I0708 21:19:31.605014   68616 fix.go:200] guest clock delta is within tolerance: 91.234129ms
	I0708 21:19:31.605026   68616 start.go:83] releasing machines lock for "kindnet-088829", held for 24.631995981s
	I0708 21:19:31.605059   68616 main.go:141] libmachine: (kindnet-088829) Calling .DriverName
	I0708 21:19:31.605371   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetIP
	I0708 21:19:31.608481   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.608860   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:31.608886   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.609109   68616 main.go:141] libmachine: (kindnet-088829) Calling .DriverName
	I0708 21:19:31.609627   68616 main.go:141] libmachine: (kindnet-088829) Calling .DriverName
	I0708 21:19:31.609807   68616 main.go:141] libmachine: (kindnet-088829) Calling .DriverName
	I0708 21:19:31.609866   68616 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 21:19:31.609908   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHHostname
	I0708 21:19:31.610042   68616 ssh_runner.go:195] Run: cat /version.json
	I0708 21:19:31.610067   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHHostname
	I0708 21:19:31.612725   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.612977   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.613127   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:31.613158   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.613300   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHPort
	I0708 21:19:31.613383   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:31.613421   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:31.613490   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:31.613591   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHPort
	I0708 21:19:31.613679   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHUsername
	I0708 21:19:31.613741   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHKeyPath
	I0708 21:19:31.613837   68616 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kindnet-088829/id_rsa Username:docker}
	I0708 21:19:31.613860   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetSSHUsername
	I0708 21:19:31.613993   68616 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/kindnet-088829/id_rsa Username:docker}
	I0708 21:19:31.700945   68616 ssh_runner.go:195] Run: systemctl --version
	I0708 21:19:31.730186   68616 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0708 21:19:31.897263   68616 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 21:19:31.905983   68616 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 21:19:31.906062   68616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 21:19:31.924964   68616 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 21:19:31.925004   68616 start.go:494] detecting cgroup driver to use...
	I0708 21:19:31.925079   68616 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 21:19:31.946908   68616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 21:19:31.963567   68616 docker.go:217] disabling cri-docker service (if available) ...
	I0708 21:19:31.963637   68616 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0708 21:19:31.980034   68616 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0708 21:19:31.995886   68616 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0708 21:19:32.120638   68616 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0708 21:19:32.288127   68616 docker.go:233] disabling docker service ...
	I0708 21:19:32.288191   68616 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0708 21:19:32.305327   68616 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0708 21:19:32.321835   68616 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0708 21:19:32.476362   68616 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0708 21:19:32.610714   68616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0708 21:19:32.626457   68616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 21:19:32.647135   68616 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0708 21:19:32.647203   68616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:19:32.660197   68616 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0708 21:19:32.660262   68616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:19:32.676281   68616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:19:32.692378   68616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:19:32.705761   68616 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 21:19:32.719855   68616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:19:32.733619   68616 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:19:32.753399   68616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0708 21:19:32.765701   68616 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 21:19:32.779432   68616 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0708 21:19:32.779507   68616 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0708 21:19:32.797301   68616 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 21:19:32.811538   68616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:19:32.974528   68616 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0708 21:19:33.145634   68616 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 21:19:33.145720   68616 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0708 21:19:33.151744   68616 start.go:562] Will wait 60s for crictl version
	I0708 21:19:33.151817   68616 ssh_runner.go:195] Run: which crictl
	I0708 21:19:33.156122   68616 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 21:19:33.200369   68616 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0708 21:19:33.200462   68616 ssh_runner.go:195] Run: crio --version
	I0708 21:19:33.229286   68616 ssh_runner.go:195] Run: crio --version
	I0708 21:19:33.262886   68616 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0708 21:19:28.990362   67987 pod_ready.go:102] pod "coredns-7db6d8ff4d-4g68k" in "kube-system" namespace has status "Ready":"False"
	I0708 21:19:30.993879   67987 pod_ready.go:102] pod "coredns-7db6d8ff4d-4g68k" in "kube-system" namespace has status "Ready":"False"
	I0708 21:19:32.994497   67987 pod_ready.go:102] pod "coredns-7db6d8ff4d-4g68k" in "kube-system" namespace has status "Ready":"False"
	I0708 21:19:31.607432   69506 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 21:19:31.607681   69506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 21:19:31.607737   69506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 21:19:31.625049   69506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43563
	I0708 21:19:31.625480   69506 main.go:141] libmachine: () Calling .GetVersion
	I0708 21:19:31.625968   69506 main.go:141] libmachine: Using API Version  1
	I0708 21:19:31.625991   69506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 21:19:31.626450   69506 main.go:141] libmachine: () Calling .GetMachineName
	I0708 21:19:31.626687   69506 main.go:141] libmachine: (calico-088829) Calling .GetMachineName
	I0708 21:19:31.626878   69506 main.go:141] libmachine: (calico-088829) Calling .DriverName
	I0708 21:19:31.627079   69506 start.go:159] libmachine.API.Create for "calico-088829" (driver="kvm2")
	I0708 21:19:31.627107   69506 client.go:168] LocalClient.Create starting
	I0708 21:19:31.627142   69506 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem
	I0708 21:19:31.627190   69506 main.go:141] libmachine: Decoding PEM data...
	I0708 21:19:31.627214   69506 main.go:141] libmachine: Parsing certificate...
	I0708 21:19:31.627279   69506 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem
	I0708 21:19:31.627306   69506 main.go:141] libmachine: Decoding PEM data...
	I0708 21:19:31.627326   69506 main.go:141] libmachine: Parsing certificate...
	I0708 21:19:31.627357   69506 main.go:141] libmachine: Running pre-create checks...
	I0708 21:19:31.627375   69506 main.go:141] libmachine: (calico-088829) Calling .PreCreateCheck
	I0708 21:19:31.627853   69506 main.go:141] libmachine: (calico-088829) Calling .GetConfigRaw
	I0708 21:19:31.628362   69506 main.go:141] libmachine: Creating machine...
	I0708 21:19:31.628380   69506 main.go:141] libmachine: (calico-088829) Calling .Create
	I0708 21:19:31.628559   69506 main.go:141] libmachine: (calico-088829) Creating KVM machine...
	I0708 21:19:31.629888   69506 main.go:141] libmachine: (calico-088829) DBG | found existing default KVM network
	I0708 21:19:31.631348   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:31.631175   69528 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:bd:f7:3b} reservation:<nil>}
	I0708 21:19:31.632358   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:31.632271   69528 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9f:f4:e7} reservation:<nil>}
	I0708 21:19:31.633504   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:31.633413   69528 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00020be80}
	I0708 21:19:31.633552   69506 main.go:141] libmachine: (calico-088829) DBG | created network xml: 
	I0708 21:19:31.633567   69506 main.go:141] libmachine: (calico-088829) DBG | <network>
	I0708 21:19:31.633581   69506 main.go:141] libmachine: (calico-088829) DBG |   <name>mk-calico-088829</name>
	I0708 21:19:31.633589   69506 main.go:141] libmachine: (calico-088829) DBG |   <dns enable='no'/>
	I0708 21:19:31.633605   69506 main.go:141] libmachine: (calico-088829) DBG |   
	I0708 21:19:31.633617   69506 main.go:141] libmachine: (calico-088829) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0708 21:19:31.633628   69506 main.go:141] libmachine: (calico-088829) DBG |     <dhcp>
	I0708 21:19:31.633640   69506 main.go:141] libmachine: (calico-088829) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0708 21:19:31.633652   69506 main.go:141] libmachine: (calico-088829) DBG |     </dhcp>
	I0708 21:19:31.633658   69506 main.go:141] libmachine: (calico-088829) DBG |   </ip>
	I0708 21:19:31.633680   69506 main.go:141] libmachine: (calico-088829) DBG |   
	I0708 21:19:31.633701   69506 main.go:141] libmachine: (calico-088829) DBG | </network>
	I0708 21:19:31.633716   69506 main.go:141] libmachine: (calico-088829) DBG | 
	I0708 21:19:31.639441   69506 main.go:141] libmachine: (calico-088829) DBG | trying to create private KVM network mk-calico-088829 192.168.61.0/24...
	I0708 21:19:31.721661   69506 main.go:141] libmachine: (calico-088829) DBG | private KVM network mk-calico-088829 192.168.61.0/24 created
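Once the private libvirt network is reported as created, it can be inspected directly with virsh; a sketch, with the network name taken from the log and a system libvirt connection assumed:

    virsh --connect qemu:///system net-list --all               # mk-calico-088829 should appear as active
    virsh --connect qemu:///system net-dumpxml mk-calico-088829 # XML matching the definition above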
	I0708 21:19:31.721773   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:31.721645   69528 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 21:19:31.721828   69506 main.go:141] libmachine: (calico-088829) Setting up store path in /home/jenkins/minikube-integration/19195-5988/.minikube/machines/calico-088829 ...
	I0708 21:19:31.721858   69506 main.go:141] libmachine: (calico-088829) Building disk image from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso
	I0708 21:19:31.721882   69506 main.go:141] libmachine: (calico-088829) Downloading /home/jenkins/minikube-integration/19195-5988/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso...
	I0708 21:19:31.992986   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:31.992851   69528 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/calico-088829/id_rsa...
	I0708 21:19:32.205970   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:32.205775   69528 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/calico-088829/calico-088829.rawdisk...
	I0708 21:19:32.206030   69506 main.go:141] libmachine: (calico-088829) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/calico-088829 (perms=drwx------)
	I0708 21:19:32.206042   69506 main.go:141] libmachine: (calico-088829) DBG | Writing magic tar header
	I0708 21:19:32.206055   69506 main.go:141] libmachine: (calico-088829) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube/machines (perms=drwxr-xr-x)
	I0708 21:19:32.206082   69506 main.go:141] libmachine: (calico-088829) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988/.minikube (perms=drwxr-xr-x)
	I0708 21:19:32.206094   69506 main.go:141] libmachine: (calico-088829) Setting executable bit set on /home/jenkins/minikube-integration/19195-5988 (perms=drwxrwxr-x)
	I0708 21:19:32.206110   69506 main.go:141] libmachine: (calico-088829) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0708 21:19:32.206123   69506 main.go:141] libmachine: (calico-088829) DBG | Writing SSH key tar header
	I0708 21:19:32.206139   69506 main.go:141] libmachine: (calico-088829) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0708 21:19:32.206154   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:32.205886   69528 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19195-5988/.minikube/machines/calico-088829 ...
	I0708 21:19:32.206167   69506 main.go:141] libmachine: (calico-088829) Creating domain...
	I0708 21:19:32.206246   69506 main.go:141] libmachine: (calico-088829) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines/calico-088829
	I0708 21:19:32.206280   69506 main.go:141] libmachine: (calico-088829) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube/machines
	I0708 21:19:32.206293   69506 main.go:141] libmachine: (calico-088829) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 21:19:32.206313   69506 main.go:141] libmachine: (calico-088829) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19195-5988
	I0708 21:19:32.206325   69506 main.go:141] libmachine: (calico-088829) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0708 21:19:32.206334   69506 main.go:141] libmachine: (calico-088829) DBG | Checking permissions on dir: /home/jenkins
	I0708 21:19:32.206347   69506 main.go:141] libmachine: (calico-088829) DBG | Checking permissions on dir: /home
	I0708 21:19:32.206354   69506 main.go:141] libmachine: (calico-088829) DBG | Skipping /home - not owner
	I0708 21:19:32.207358   69506 main.go:141] libmachine: (calico-088829) define libvirt domain using xml: 
	I0708 21:19:32.207387   69506 main.go:141] libmachine: (calico-088829) <domain type='kvm'>
	I0708 21:19:32.207417   69506 main.go:141] libmachine: (calico-088829)   <name>calico-088829</name>
	I0708 21:19:32.207443   69506 main.go:141] libmachine: (calico-088829)   <memory unit='MiB'>3072</memory>
	I0708 21:19:32.207492   69506 main.go:141] libmachine: (calico-088829)   <vcpu>2</vcpu>
	I0708 21:19:32.207519   69506 main.go:141] libmachine: (calico-088829)   <features>
	I0708 21:19:32.207530   69506 main.go:141] libmachine: (calico-088829)     <acpi/>
	I0708 21:19:32.207552   69506 main.go:141] libmachine: (calico-088829)     <apic/>
	I0708 21:19:32.207573   69506 main.go:141] libmachine: (calico-088829)     <pae/>
	I0708 21:19:32.207589   69506 main.go:141] libmachine: (calico-088829)     
	I0708 21:19:32.207602   69506 main.go:141] libmachine: (calico-088829)   </features>
	I0708 21:19:32.207613   69506 main.go:141] libmachine: (calico-088829)   <cpu mode='host-passthrough'>
	I0708 21:19:32.207637   69506 main.go:141] libmachine: (calico-088829)   
	I0708 21:19:32.207650   69506 main.go:141] libmachine: (calico-088829)   </cpu>
	I0708 21:19:32.207661   69506 main.go:141] libmachine: (calico-088829)   <os>
	I0708 21:19:32.207671   69506 main.go:141] libmachine: (calico-088829)     <type>hvm</type>
	I0708 21:19:32.207680   69506 main.go:141] libmachine: (calico-088829)     <boot dev='cdrom'/>
	I0708 21:19:32.207694   69506 main.go:141] libmachine: (calico-088829)     <boot dev='hd'/>
	I0708 21:19:32.207706   69506 main.go:141] libmachine: (calico-088829)     <bootmenu enable='no'/>
	I0708 21:19:32.207716   69506 main.go:141] libmachine: (calico-088829)   </os>
	I0708 21:19:32.207725   69506 main.go:141] libmachine: (calico-088829)   <devices>
	I0708 21:19:32.207735   69506 main.go:141] libmachine: (calico-088829)     <disk type='file' device='cdrom'>
	I0708 21:19:32.207751   69506 main.go:141] libmachine: (calico-088829)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/calico-088829/boot2docker.iso'/>
	I0708 21:19:32.207764   69506 main.go:141] libmachine: (calico-088829)       <target dev='hdc' bus='scsi'/>
	I0708 21:19:32.207775   69506 main.go:141] libmachine: (calico-088829)       <readonly/>
	I0708 21:19:32.207783   69506 main.go:141] libmachine: (calico-088829)     </disk>
	I0708 21:19:32.207795   69506 main.go:141] libmachine: (calico-088829)     <disk type='file' device='disk'>
	I0708 21:19:32.207810   69506 main.go:141] libmachine: (calico-088829)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0708 21:19:32.207823   69506 main.go:141] libmachine: (calico-088829)       <source file='/home/jenkins/minikube-integration/19195-5988/.minikube/machines/calico-088829/calico-088829.rawdisk'/>
	I0708 21:19:32.207848   69506 main.go:141] libmachine: (calico-088829)       <target dev='hda' bus='virtio'/>
	I0708 21:19:32.207874   69506 main.go:141] libmachine: (calico-088829)     </disk>
	I0708 21:19:32.207884   69506 main.go:141] libmachine: (calico-088829)     <interface type='network'>
	I0708 21:19:32.207903   69506 main.go:141] libmachine: (calico-088829)       <source network='mk-calico-088829'/>
	I0708 21:19:32.207917   69506 main.go:141] libmachine: (calico-088829)       <model type='virtio'/>
	I0708 21:19:32.207931   69506 main.go:141] libmachine: (calico-088829)     </interface>
	I0708 21:19:32.207950   69506 main.go:141] libmachine: (calico-088829)     <interface type='network'>
	I0708 21:19:32.207988   69506 main.go:141] libmachine: (calico-088829)       <source network='default'/>
	I0708 21:19:32.208017   69506 main.go:141] libmachine: (calico-088829)       <model type='virtio'/>
	I0708 21:19:32.208038   69506 main.go:141] libmachine: (calico-088829)     </interface>
	I0708 21:19:32.208053   69506 main.go:141] libmachine: (calico-088829)     <serial type='pty'>
	I0708 21:19:32.208064   69506 main.go:141] libmachine: (calico-088829)       <target port='0'/>
	I0708 21:19:32.208077   69506 main.go:141] libmachine: (calico-088829)     </serial>
	I0708 21:19:32.208089   69506 main.go:141] libmachine: (calico-088829)     <console type='pty'>
	I0708 21:19:32.208102   69506 main.go:141] libmachine: (calico-088829)       <target type='serial' port='0'/>
	I0708 21:19:32.208114   69506 main.go:141] libmachine: (calico-088829)     </console>
	I0708 21:19:32.208125   69506 main.go:141] libmachine: (calico-088829)     <rng model='virtio'>
	I0708 21:19:32.208139   69506 main.go:141] libmachine: (calico-088829)       <backend model='random'>/dev/random</backend>
	I0708 21:19:32.208152   69506 main.go:141] libmachine: (calico-088829)     </rng>
	I0708 21:19:32.208159   69506 main.go:141] libmachine: (calico-088829)     
	I0708 21:19:32.208171   69506 main.go:141] libmachine: (calico-088829)     
	I0708 21:19:32.208180   69506 main.go:141] libmachine: (calico-088829)   </devices>
	I0708 21:19:32.208193   69506 main.go:141] libmachine: (calico-088829) </domain>
	I0708 21:19:32.208203   69506 main.go:141] libmachine: (calico-088829) 
	I0708 21:19:32.212403   69506 main.go:141] libmachine: (calico-088829) DBG | domain calico-088829 has defined MAC address 52:54:00:10:8f:1f in network default
	I0708 21:19:32.213037   69506 main.go:141] libmachine: (calico-088829) Ensuring networks are active...
	I0708 21:19:32.213059   69506 main.go:141] libmachine: (calico-088829) DBG | domain calico-088829 has defined MAC address 52:54:00:26:91:2c in network mk-calico-088829
	I0708 21:19:32.213662   69506 main.go:141] libmachine: (calico-088829) Ensuring network default is active
	I0708 21:19:32.213919   69506 main.go:141] libmachine: (calico-088829) Ensuring network mk-calico-088829 is active
	I0708 21:19:32.214427   69506 main.go:141] libmachine: (calico-088829) Getting domain xml...
	I0708 21:19:32.215132   69506 main.go:141] libmachine: (calico-088829) Creating domain...
	I0708 21:19:33.642614   69506 main.go:141] libmachine: (calico-088829) Waiting to get IP...
	I0708 21:19:33.643591   69506 main.go:141] libmachine: (calico-088829) DBG | domain calico-088829 has defined MAC address 52:54:00:26:91:2c in network mk-calico-088829
	I0708 21:19:33.644214   69506 main.go:141] libmachine: (calico-088829) DBG | unable to find current IP address of domain calico-088829 in network mk-calico-088829
	I0708 21:19:33.644248   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:33.644123   69528 retry.go:31] will retry after 251.762007ms: waiting for machine to come up
	I0708 21:19:33.897851   69506 main.go:141] libmachine: (calico-088829) DBG | domain calico-088829 has defined MAC address 52:54:00:26:91:2c in network mk-calico-088829
	I0708 21:19:33.898976   69506 main.go:141] libmachine: (calico-088829) DBG | unable to find current IP address of domain calico-088829 in network mk-calico-088829
	I0708 21:19:33.899040   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:33.898891   69528 retry.go:31] will retry after 388.489866ms: waiting for machine to come up
	I0708 21:19:34.289791   69506 main.go:141] libmachine: (calico-088829) DBG | domain calico-088829 has defined MAC address 52:54:00:26:91:2c in network mk-calico-088829
	I0708 21:19:34.290719   69506 main.go:141] libmachine: (calico-088829) DBG | unable to find current IP address of domain calico-088829 in network mk-calico-088829
	I0708 21:19:34.290753   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:34.290656   69528 retry.go:31] will retry after 338.231293ms: waiting for machine to come up
	I0708 21:19:34.630178   69506 main.go:141] libmachine: (calico-088829) DBG | domain calico-088829 has defined MAC address 52:54:00:26:91:2c in network mk-calico-088829
	I0708 21:19:34.631853   69506 main.go:141] libmachine: (calico-088829) DBG | unable to find current IP address of domain calico-088829 in network mk-calico-088829
	I0708 21:19:34.631879   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:34.631792   69528 retry.go:31] will retry after 546.274728ms: waiting for machine to come up
	I0708 21:19:35.179532   69506 main.go:141] libmachine: (calico-088829) DBG | domain calico-088829 has defined MAC address 52:54:00:26:91:2c in network mk-calico-088829
	I0708 21:19:35.180147   69506 main.go:141] libmachine: (calico-088829) DBG | unable to find current IP address of domain calico-088829 in network mk-calico-088829
	I0708 21:19:35.180173   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:35.180097   69528 retry.go:31] will retry after 684.846546ms: waiting for machine to come up
	I0708 21:19:35.867136   69506 main.go:141] libmachine: (calico-088829) DBG | domain calico-088829 has defined MAC address 52:54:00:26:91:2c in network mk-calico-088829
	I0708 21:19:35.867713   69506 main.go:141] libmachine: (calico-088829) DBG | unable to find current IP address of domain calico-088829 in network mk-calico-088829
	I0708 21:19:35.867748   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:35.867659   69528 retry.go:31] will retry after 630.639181ms: waiting for machine to come up
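The retry loop above simply polls libvirt until the new guest picks up a DHCP lease on mk-calico-088829. Equivalent manual checks while the machine boots (domain and network names from the log, system connection assumed):

    virsh --connect qemu:///system domifaddr calico-088829           # interface addresses once a lease exists
    virsh --connect qemu:///system net-dhcp-leases mk-calico-088829  # leases handed out on the private network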
	I0708 21:19:33.264239   68616 main.go:141] libmachine: (kindnet-088829) Calling .GetIP
	I0708 21:19:33.269107   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:33.271256   68616 main.go:141] libmachine: (kindnet-088829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e8:3f", ip: ""} in network mk-kindnet-088829: {Iface:virbr4 ExpiryTime:2024-07-08 22:19:22 +0000 UTC Type:0 Mac:52:54:00:8f:e8:3f Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:kindnet-088829 Clientid:01:52:54:00:8f:e8:3f}
	I0708 21:19:33.271289   68616 main.go:141] libmachine: (kindnet-088829) DBG | domain kindnet-088829 has defined IP address 192.168.39.194 and MAC address 52:54:00:8f:e8:3f in network mk-kindnet-088829
	I0708 21:19:33.271645   68616 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0708 21:19:33.276891   68616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 21:19:33.290286   68616 kubeadm.go:877] updating cluster {Name:kindnet-088829 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:kindnet-088829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 21:19:33.290421   68616 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0708 21:19:33.290473   68616 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 21:19:33.328990   68616 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0708 21:19:33.329070   68616 ssh_runner.go:195] Run: which lz4
	I0708 21:19:33.333458   68616 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 21:19:33.338148   68616 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 21:19:33.338184   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0708 21:19:34.993188   68616 crio.go:462] duration metric: took 1.659773758s to copy over tarball
	I0708 21:19:34.993260   68616 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 21:19:35.494487   67987 pod_ready.go:102] pod "coredns-7db6d8ff4d-4g68k" in "kube-system" namespace has status "Ready":"False"
	I0708 21:19:37.989729   67987 pod_ready.go:102] pod "coredns-7db6d8ff4d-4g68k" in "kube-system" namespace has status "Ready":"False"
	I0708 21:19:37.536282   68616 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.542992216s)
	I0708 21:19:37.536313   68616 crio.go:469] duration metric: took 2.543099791s to extract the tarball
	I0708 21:19:37.536322   68616 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 21:19:37.582556   68616 ssh_runner.go:195] Run: sudo crictl images --output json
	I0708 21:19:37.626335   68616 crio.go:514] all images are preloaded for cri-o runtime.
	I0708 21:19:37.626363   68616 cache_images.go:84] Images are preloaded, skipping loading
	I0708 21:19:37.626373   68616 kubeadm.go:928] updating node { 192.168.39.194 8443 v1.30.2 crio true true} ...
	I0708 21:19:37.626503   68616 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-088829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:kindnet-088829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0708 21:19:37.626590   68616 ssh_runner.go:195] Run: crio config
	I0708 21:19:37.687249   68616 cni.go:84] Creating CNI manager for "kindnet"
	I0708 21:19:37.687273   68616 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 21:19:37.687297   68616 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-088829 NodeName:kindnet-088829 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 21:19:37.687445   68616 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-088829"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
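The kubeadm documents above (plus the kubelet and kube-proxy configs) are what minikube writes out to /var/tmp/minikube/kubeadm.yaml a few steps later. As a sketch, recent kubeadm releases can sanity-check such a file before init runs; the binary path matches the one used elsewhere in this log, and availability of the validate subcommand in this build is an assumption:

    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml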
	
	I0708 21:19:37.687528   68616 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 21:19:37.697872   68616 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 21:19:37.697943   68616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 21:19:37.707627   68616 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0708 21:19:37.728749   68616 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 21:19:37.749031   68616 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0708 21:19:37.767834   68616 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0708 21:19:37.772217   68616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 21:19:37.786404   68616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 21:19:37.905360   68616 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 21:19:37.923077   68616 certs.go:68] Setting up /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829 for IP: 192.168.39.194
	I0708 21:19:37.923100   68616 certs.go:194] generating shared ca certs ...
	I0708 21:19:37.923128   68616 certs.go:226] acquiring lock for ca certs: {Name:mk2b44e1aac6acd08d7e818dc0b4a0d63e10f898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:19:37.923328   68616 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key
	I0708 21:19:37.923383   68616 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key
	I0708 21:19:37.923396   68616 certs.go:256] generating profile certs ...
	I0708 21:19:37.923480   68616 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/client.key
	I0708 21:19:37.923505   68616 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/client.crt with IP's: []
	I0708 21:19:38.080057   68616 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/client.crt ...
	I0708 21:19:38.080091   68616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/client.crt: {Name:mkf6d622b6192c65ec1305a3d9806d0b1f3cde80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:19:38.080300   68616 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/client.key ...
	I0708 21:19:38.080320   68616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/client.key: {Name:mk877d27f9faf57601b10f7de1cb5da16ba3efe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:19:38.080452   68616 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.key.eeda9285
	I0708 21:19:38.080475   68616 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.crt.eeda9285 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.194]
	I0708 21:19:38.289959   68616 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.crt.eeda9285 ...
	I0708 21:19:38.289991   68616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.crt.eeda9285: {Name:mk792e3e028452a04f23ac5cc7a691922760739c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:19:38.290194   68616 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.key.eeda9285 ...
	I0708 21:19:38.290214   68616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.key.eeda9285: {Name:mk9808d3b8f8d714293be3336f86d5ca9ad980f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:19:38.290316   68616 certs.go:381] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.crt.eeda9285 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.crt
	I0708 21:19:38.290432   68616 certs.go:385] copying /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.key.eeda9285 -> /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.key
	I0708 21:19:38.290519   68616 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/proxy-client.key
	I0708 21:19:38.290539   68616 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/proxy-client.crt with IP's: []
	I0708 21:19:38.407910   68616 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/proxy-client.crt ...
	I0708 21:19:38.407937   68616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/proxy-client.crt: {Name:mk937a5cf0574e92faa19b7d8d4f555de153d07e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 21:19:38.408114   68616 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/proxy-client.key ...
	I0708 21:19:38.408132   68616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/proxy-client.key: {Name:mk5f59c4ae3dbf14cf363b48ad09dd088bf65c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
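The apiserver profile certificate generated above is signed for the service IP, loopback, and the node IP. A quick way to confirm the SANs on the written cert (path from the log; the exact openssl text layout may vary):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.crt \
        | grep -A1 'Subject Alternative Name'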
	I0708 21:19:38.408349   68616 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem (1338 bytes)
	W0708 21:19:38.408387   68616 certs.go:480] ignoring /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141_empty.pem, impossibly tiny 0 bytes
	I0708 21:19:38.408396   68616 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 21:19:38.408419   68616 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/ca.pem (1078 bytes)
	I0708 21:19:38.408455   68616 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/cert.pem (1123 bytes)
	I0708 21:19:38.408478   68616 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/certs/key.pem (1679 bytes)
	I0708 21:19:38.408514   68616 certs.go:484] found cert: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem (1708 bytes)
	I0708 21:19:38.409081   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 21:19:38.439898   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 21:19:38.470553   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 21:19:38.498627   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 21:19:38.526654   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0708 21:19:38.554422   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 21:19:38.582829   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 21:19:38.612687   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/kindnet-088829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 21:19:38.677347   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/certs/13141.pem --> /usr/share/ca-certificates/13141.pem (1338 bytes)
	I0708 21:19:38.710414   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/ssl/certs/131412.pem --> /usr/share/ca-certificates/131412.pem (1708 bytes)
	I0708 21:19:38.744322   68616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19195-5988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 21:19:38.775320   68616 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 21:19:38.802493   68616 ssh_runner.go:195] Run: openssl version
	I0708 21:19:38.809065   68616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 21:19:38.821306   68616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:19:38.826398   68616 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:19:38.826493   68616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 21:19:38.832978   68616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 21:19:38.844819   68616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13141.pem && ln -fs /usr/share/ca-certificates/13141.pem /etc/ssl/certs/13141.pem"
	I0708 21:19:38.857222   68616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13141.pem
	I0708 21:19:38.862181   68616 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:41 /usr/share/ca-certificates/13141.pem
	I0708 21:19:38.862244   68616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13141.pem
	I0708 21:19:38.868613   68616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13141.pem /etc/ssl/certs/51391683.0"
	I0708 21:19:38.880994   68616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131412.pem && ln -fs /usr/share/ca-certificates/131412.pem /etc/ssl/certs/131412.pem"
	I0708 21:19:38.896751   68616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131412.pem
	I0708 21:19:38.903225   68616 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:41 /usr/share/ca-certificates/131412.pem
	I0708 21:19:38.903303   68616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131412.pem
	I0708 21:19:38.909735   68616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131412.pem /etc/ssl/certs/3ec20f2e.0"
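The 8-character names such as b5213941.0, 51391683.0 and 3ec20f2e.0 are OpenSSL subject hashes, which is why each symlink above is preceded by an openssl x509 -hash call. The same pattern by hand, using the minikube CA as the example:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"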
	I0708 21:19:38.922716   68616 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 21:19:38.927804   68616 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 21:19:38.927904   68616 kubeadm.go:391] StartCluster: {Name:kindnet-088829 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:kindnet-088829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 21:19:38.928025   68616 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 21:19:38.928107   68616 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 21:19:38.969201   68616 cri.go:89] found id: ""
	I0708 21:19:38.969277   68616 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 21:19:38.979821   68616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 21:19:38.992371   68616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 21:19:39.002868   68616 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 21:19:39.002898   68616 kubeadm.go:156] found existing configuration files:
	
	I0708 21:19:39.002951   68616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 21:19:39.013004   68616 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 21:19:39.013089   68616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 21:19:39.023782   68616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 21:19:39.034434   68616 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 21:19:39.034520   68616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 21:19:39.048455   68616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 21:19:39.059284   68616 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 21:19:39.059353   68616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 21:19:39.070304   68616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 21:19:39.081714   68616 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 21:19:39.081781   68616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 21:19:39.092371   68616 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 21:19:39.156531   68616 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 21:19:39.156711   68616 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 21:19:39.310714   68616 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 21:19:39.310899   68616 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 21:19:39.311041   68616 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
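As the preflight message suggests, the control-plane images can be pulled ahead of time with kubeadm itself; a sketch for this cluster, with the version, CRI socket and binary path taken from earlier in the log:

    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config images pull \
        --kubernetes-version v1.30.2 --cri-socket unix:///var/run/crio/crio.sock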
	I0708 21:19:39.587375   68616 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 21:19:36.499684   69506 main.go:141] libmachine: (calico-088829) DBG | domain calico-088829 has defined MAC address 52:54:00:26:91:2c in network mk-calico-088829
	I0708 21:19:36.500196   69506 main.go:141] libmachine: (calico-088829) DBG | unable to find current IP address of domain calico-088829 in network mk-calico-088829
	I0708 21:19:36.500242   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:36.500156   69528 retry.go:31] will retry after 982.775408ms: waiting for machine to come up
	I0708 21:19:37.484775   69506 main.go:141] libmachine: (calico-088829) DBG | domain calico-088829 has defined MAC address 52:54:00:26:91:2c in network mk-calico-088829
	I0708 21:19:37.485501   69506 main.go:141] libmachine: (calico-088829) DBG | unable to find current IP address of domain calico-088829 in network mk-calico-088829
	I0708 21:19:37.485537   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:37.485470   69528 retry.go:31] will retry after 1.139105244s: waiting for machine to come up
	I0708 21:19:38.625835   69506 main.go:141] libmachine: (calico-088829) DBG | domain calico-088829 has defined MAC address 52:54:00:26:91:2c in network mk-calico-088829
	I0708 21:19:38.626420   69506 main.go:141] libmachine: (calico-088829) DBG | unable to find current IP address of domain calico-088829 in network mk-calico-088829
	I0708 21:19:38.626441   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:38.626381   69528 retry.go:31] will retry after 1.127813159s: waiting for machine to come up
	I0708 21:19:39.756776   69506 main.go:141] libmachine: (calico-088829) DBG | domain calico-088829 has defined MAC address 52:54:00:26:91:2c in network mk-calico-088829
	I0708 21:19:39.757356   69506 main.go:141] libmachine: (calico-088829) DBG | unable to find current IP address of domain calico-088829 in network mk-calico-088829
	I0708 21:19:39.757383   69506 main.go:141] libmachine: (calico-088829) DBG | I0708 21:19:39.757321   69528 retry.go:31] will retry after 1.90423723s: waiting for machine to come up
	I0708 21:19:39.622553   68616 out.go:204]   - Generating certificates and keys ...
	I0708 21:19:39.622685   68616 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 21:19:39.622782   68616 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 21:19:39.766533   68616 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0708 21:19:39.980848   68616 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0708 21:19:40.073665   68616 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0708 21:19:40.192095   68616 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0708 21:19:40.321436   68616 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0708 21:19:40.321583   68616 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-088829 localhost] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0708 21:19:40.541560   68616 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0708 21:19:40.541755   68616 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-088829 localhost] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0708 21:19:40.692651   68616 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0708 21:19:40.920193   68616 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0708 21:19:41.028818   68616 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0708 21:19:41.029106   68616 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 21:19:41.214843   68616 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 21:19:41.595037   68616 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 21:19:41.720479   68616 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 21:19:41.822973   68616 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 21:19:42.107357   68616 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 21:19:42.108075   68616 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 21:19:42.113275   68616 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Jul 08 21:19:42 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:42.997180650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473582997151459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8172452a-8449-41be-81b5-b6a0d8c3de69 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:19:42 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:42.998486897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca0993e8-9818-4a1b-9c54-316885a4af75 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:19:42 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:42.998721386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca0993e8-9818-4a1b-9c54-316885a4af75 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:19:42 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:42.998953663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2e3069d015518dcd5e4c0967245dd74359ccdd3a693e5b4e26b330a139e95ab9,PodSandboxId:6afaa0f9dfe4869e8cc4dd4b3b075fdeb333c5e34088f77329936236ede1710a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472495932351496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8msvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38c1e0eb-5eb4-4acb-a5ae-c72871884e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 67ba72c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4084256f6479f4d4d67c4cf0c6e045ed54a7e9d883968077655fa6a188e7e5a,PodSandboxId:424bb8d1df2945e4c7a6543ecea7af6889b52de644565ac54774a8466116fa83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472495480249719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 805a8fdb-ed9e-4f80-a2c9-7d8a0155b228,},Annotations:map[string]string{io.kubernetes.container.hash: 881740b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de092804020ec874ad903cba82425d744cade6acadd234fae7472c54a580e7b,PodSandboxId:0d491e8ede82b38f0c69cd28c624735670d471e8454bbba7ed0ebb55519e9f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472494400846679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hq7zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddb0f99d-a91d-4bb7-96e7-695b6101a601,},Annotations:map[string]string{io.kubernetes.container.hash: 5c1d43b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e3b0cd648694b3f58bf5d849690114c88e9bbf8bb427f3f7a291c723ea4ac,PodSandboxId:d5eb5df2c91fca807a98e2633a3323bc0632af36985b1a5ea834a384058c1ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1720472494099867717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2mdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d70ae2-ed86-49bd-8910-a12c5cd8091a,},Annotations:map[string]string{io.kubernetes.container.hash: 4395f9e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6924e8ced977682d418cea0d436ce49cf79ee382272cb973c8dce7ef6eed6b5,PodSandboxId:a70bc3eb6f4c04162a76fcf65ff5dce7b7a4359f108796f57dd38de4f85e5e9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172047247376840774
4,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd817aef551a1a373ed796646422588,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f55d96f0b61615e83effa00dfff2f7f1cb7042fa84dd01741ec99c489c1cb0b,PodSandboxId:9000c90118635dcdea0100dab133192632f107ee54d7a238d153e5b98fc2fcdb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472473767173365,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df570c9bdb1120e2db1c21b23efdd45,},Annotations:map[string]string{io.kubernetes.container.hash: 1a36d12c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3647b50ce1b4e99d8a409635d93fb22ffbdad34501c3dcbf031498e75ffbab,PodSandboxId:32e80034e1af0e67a39de4df58fe89b2e58887fa59c554adb1298f70c9c2673f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472473712140892,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cecc367fdaa42e3448bb0470688d7b39,},Annotations:map[string]string{io.kubernetes.container.hash: 451cdd04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16947ba6fb46a98e68f1a9f8639e8ceb7d4ce698bbbdc562e43dfbfb921bc130,PodSandboxId:53fa2bbde8261450cb7eb5ad812de328c035611520be6db541d4abc3822737ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472473716706863,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 443e4b2ad13f1980b427a0563ef15fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca0993e8-9818-4a1b-9c54-316885a4af75 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.047058147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7f569c4-3691-4a44-b087-0826227f186c name=/runtime.v1.RuntimeService/Version
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.047184622Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7f569c4-3691-4a44-b087-0826227f186c name=/runtime.v1.RuntimeService/Version
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.049337008Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24e6c654-a633-4dd6-981b-6e843cf9a368 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.050772390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473583050475746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24e6c654-a633-4dd6-981b-6e843cf9a368 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.052276511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f15f83be-ca5f-4318-b3d3-c8363c373535 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.052367790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f15f83be-ca5f-4318-b3d3-c8363c373535 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.052708368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2e3069d015518dcd5e4c0967245dd74359ccdd3a693e5b4e26b330a139e95ab9,PodSandboxId:6afaa0f9dfe4869e8cc4dd4b3b075fdeb333c5e34088f77329936236ede1710a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472495932351496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8msvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38c1e0eb-5eb4-4acb-a5ae-c72871884e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 67ba72c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4084256f6479f4d4d67c4cf0c6e045ed54a7e9d883968077655fa6a188e7e5a,PodSandboxId:424bb8d1df2945e4c7a6543ecea7af6889b52de644565ac54774a8466116fa83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472495480249719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 805a8fdb-ed9e-4f80-a2c9-7d8a0155b228,},Annotations:map[string]string{io.kubernetes.container.hash: 881740b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de092804020ec874ad903cba82425d744cade6acadd234fae7472c54a580e7b,PodSandboxId:0d491e8ede82b38f0c69cd28c624735670d471e8454bbba7ed0ebb55519e9f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472494400846679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hq7zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddb0f99d-a91d-4bb7-96e7-695b6101a601,},Annotations:map[string]string{io.kubernetes.container.hash: 5c1d43b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e3b0cd648694b3f58bf5d849690114c88e9bbf8bb427f3f7a291c723ea4ac,PodSandboxId:d5eb5df2c91fca807a98e2633a3323bc0632af36985b1a5ea834a384058c1ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1720472494099867717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2mdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d70ae2-ed86-49bd-8910-a12c5cd8091a,},Annotations:map[string]string{io.kubernetes.container.hash: 4395f9e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6924e8ced977682d418cea0d436ce49cf79ee382272cb973c8dce7ef6eed6b5,PodSandboxId:a70bc3eb6f4c04162a76fcf65ff5dce7b7a4359f108796f57dd38de4f85e5e9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172047247376840774
4,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd817aef551a1a373ed796646422588,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f55d96f0b61615e83effa00dfff2f7f1cb7042fa84dd01741ec99c489c1cb0b,PodSandboxId:9000c90118635dcdea0100dab133192632f107ee54d7a238d153e5b98fc2fcdb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472473767173365,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df570c9bdb1120e2db1c21b23efdd45,},Annotations:map[string]string{io.kubernetes.container.hash: 1a36d12c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3647b50ce1b4e99d8a409635d93fb22ffbdad34501c3dcbf031498e75ffbab,PodSandboxId:32e80034e1af0e67a39de4df58fe89b2e58887fa59c554adb1298f70c9c2673f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472473712140892,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cecc367fdaa42e3448bb0470688d7b39,},Annotations:map[string]string{io.kubernetes.container.hash: 451cdd04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16947ba6fb46a98e68f1a9f8639e8ceb7d4ce698bbbdc562e43dfbfb921bc130,PodSandboxId:53fa2bbde8261450cb7eb5ad812de328c035611520be6db541d4abc3822737ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472473716706863,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 443e4b2ad13f1980b427a0563ef15fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f15f83be-ca5f-4318-b3d3-c8363c373535 name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.096472714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4228d44-7b61-4258-9ab5-77556dad14d0 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.096619320Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4228d44-7b61-4258-9ab5-77556dad14d0 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.098108139Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f501308-b400-4acb-8013-9bf1464c230c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.098745116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473583098720798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f501308-b400-4acb-8013-9bf1464c230c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.099610940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38d5ded3-e038-4ed6-9fa1-cf0a30f0c6ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.099662173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38d5ded3-e038-4ed6-9fa1-cf0a30f0c6ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.099892334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2e3069d015518dcd5e4c0967245dd74359ccdd3a693e5b4e26b330a139e95ab9,PodSandboxId:6afaa0f9dfe4869e8cc4dd4b3b075fdeb333c5e34088f77329936236ede1710a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472495932351496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8msvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38c1e0eb-5eb4-4acb-a5ae-c72871884e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 67ba72c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4084256f6479f4d4d67c4cf0c6e045ed54a7e9d883968077655fa6a188e7e5a,PodSandboxId:424bb8d1df2945e4c7a6543ecea7af6889b52de644565ac54774a8466116fa83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472495480249719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 805a8fdb-ed9e-4f80-a2c9-7d8a0155b228,},Annotations:map[string]string{io.kubernetes.container.hash: 881740b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de092804020ec874ad903cba82425d744cade6acadd234fae7472c54a580e7b,PodSandboxId:0d491e8ede82b38f0c69cd28c624735670d471e8454bbba7ed0ebb55519e9f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472494400846679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hq7zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddb0f99d-a91d-4bb7-96e7-695b6101a601,},Annotations:map[string]string{io.kubernetes.container.hash: 5c1d43b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e3b0cd648694b3f58bf5d849690114c88e9bbf8bb427f3f7a291c723ea4ac,PodSandboxId:d5eb5df2c91fca807a98e2633a3323bc0632af36985b1a5ea834a384058c1ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1720472494099867717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2mdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d70ae2-ed86-49bd-8910-a12c5cd8091a,},Annotations:map[string]string{io.kubernetes.container.hash: 4395f9e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6924e8ced977682d418cea0d436ce49cf79ee382272cb973c8dce7ef6eed6b5,PodSandboxId:a70bc3eb6f4c04162a76fcf65ff5dce7b7a4359f108796f57dd38de4f85e5e9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172047247376840774
4,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd817aef551a1a373ed796646422588,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f55d96f0b61615e83effa00dfff2f7f1cb7042fa84dd01741ec99c489c1cb0b,PodSandboxId:9000c90118635dcdea0100dab133192632f107ee54d7a238d153e5b98fc2fcdb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472473767173365,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df570c9bdb1120e2db1c21b23efdd45,},Annotations:map[string]string{io.kubernetes.container.hash: 1a36d12c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3647b50ce1b4e99d8a409635d93fb22ffbdad34501c3dcbf031498e75ffbab,PodSandboxId:32e80034e1af0e67a39de4df58fe89b2e58887fa59c554adb1298f70c9c2673f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472473712140892,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cecc367fdaa42e3448bb0470688d7b39,},Annotations:map[string]string{io.kubernetes.container.hash: 451cdd04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16947ba6fb46a98e68f1a9f8639e8ceb7d4ce698bbbdc562e43dfbfb921bc130,PodSandboxId:53fa2bbde8261450cb7eb5ad812de328c035611520be6db541d4abc3822737ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472473716706863,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 443e4b2ad13f1980b427a0563ef15fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38d5ded3-e038-4ed6-9fa1-cf0a30f0c6ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.136476812Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d71f90bb-e782-4d47-b898-e6771972c526 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.136551448Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d71f90bb-e782-4d47-b898-e6771972c526 name=/runtime.v1.RuntimeService/Version
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.137728540Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa094f45-7fd1-45ca-becb-b825b4462947 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.138133112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1720473583138103641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa094f45-7fd1-45ca-becb-b825b4462947 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.138848626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=859ba933-255b-4d34-9062-f51e43a2d6dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.138971085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=859ba933-255b-4d34-9062-f51e43a2d6dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 08 21:19:43 default-k8s-diff-port-071971 crio[728]: time="2024-07-08 21:19:43.139171824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2e3069d015518dcd5e4c0967245dd74359ccdd3a693e5b4e26b330a139e95ab9,PodSandboxId:6afaa0f9dfe4869e8cc4dd4b3b075fdeb333c5e34088f77329936236ede1710a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472495932351496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8msvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38c1e0eb-5eb4-4acb-a5ae-c72871884e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 67ba72c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4084256f6479f4d4d67c4cf0c6e045ed54a7e9d883968077655fa6a188e7e5a,PodSandboxId:424bb8d1df2945e4c7a6543ecea7af6889b52de644565ac54774a8466116fa83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1720472495480249719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 805a8fdb-ed9e-4f80-a2c9-7d8a0155b228,},Annotations:map[string]string{io.kubernetes.container.hash: 881740b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de092804020ec874ad903cba82425d744cade6acadd234fae7472c54a580e7b,PodSandboxId:0d491e8ede82b38f0c69cd28c624735670d471e8454bbba7ed0ebb55519e9f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1720472494400846679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hq7zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddb0f99d-a91d-4bb7-96e7-695b6101a601,},Annotations:map[string]string{io.kubernetes.container.hash: 5c1d43b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e3b0cd648694b3f58bf5d849690114c88e9bbf8bb427f3f7a291c723ea4ac,PodSandboxId:d5eb5df2c91fca807a98e2633a3323bc0632af36985b1a5ea834a384058c1ad0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1720472494099867717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2mdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1d70ae2-ed86-49bd-8910-a12c5cd8091a,},Annotations:map[string]string{io.kubernetes.container.hash: 4395f9e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6924e8ced977682d418cea0d436ce49cf79ee382272cb973c8dce7ef6eed6b5,PodSandboxId:a70bc3eb6f4c04162a76fcf65ff5dce7b7a4359f108796f57dd38de4f85e5e9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172047247376840774
4,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd817aef551a1a373ed796646422588,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f55d96f0b61615e83effa00dfff2f7f1cb7042fa84dd01741ec99c489c1cb0b,PodSandboxId:9000c90118635dcdea0100dab133192632f107ee54d7a238d153e5b98fc2fcdb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1720472473767173365,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6df570c9bdb1120e2db1c21b23efdd45,},Annotations:map[string]string{io.kubernetes.container.hash: 1a36d12c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3647b50ce1b4e99d8a409635d93fb22ffbdad34501c3dcbf031498e75ffbab,PodSandboxId:32e80034e1af0e67a39de4df58fe89b2e58887fa59c554adb1298f70c9c2673f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1720472473712140892,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cecc367fdaa42e3448bb0470688d7b39,},Annotations:map[string]string{io.kubernetes.container.hash: 451cdd04,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16947ba6fb46a98e68f1a9f8639e8ceb7d4ce698bbbdc562e43dfbfb921bc130,PodSandboxId:53fa2bbde8261450cb7eb5ad812de328c035611520be6db541d4abc3822737ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1720472473716706863,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 443e4b2ad13f1980b427a0563ef15fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=859ba933-255b-4d34-9062-f51e43a2d6dc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2e3069d015518       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Running             coredns                   0                   6afaa0f9dfe48       coredns-7db6d8ff4d-8msvk
	e4084256f6479       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   424bb8d1df294       storage-provisioner
	3de092804020e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Running             coredns                   0                   0d491e8ede82b       coredns-7db6d8ff4d-hq7zj
	3e4e3b0cd6486       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   18 minutes ago      Running             kube-proxy                0                   d5eb5df2c91fc       kube-proxy-l2mdd
	b6924e8ced977       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   18 minutes ago      Running             kube-scheduler            2                   a70bc3eb6f4c0       kube-scheduler-default-k8s-diff-port-071971
	0f55d96f0b616       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   18 minutes ago      Running             etcd                      2                   9000c90118635       etcd-default-k8s-diff-port-071971
	16947ba6fb46a       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   18 minutes ago      Running             kube-controller-manager   2                   53fa2bbde8261       kube-controller-manager-default-k8s-diff-port-071971
	3e3647b50ce1b       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   18 minutes ago      Running             kube-apiserver            2                   32e80034e1af0       kube-apiserver-default-k8s-diff-port-071971
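	
	A listing equivalent to the table above can also be pulled straight from the node's CRI-O runtime. A minimal sketch, assuming crictl is available on the node image used in this run and that the profile name matches the node shown above (default-k8s-diff-port-071971):
	
	  out/minikube-linux-amd64 -p default-k8s-diff-port-071971 ssh -- sudo crictl ps -a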
	
	
	==> coredns [2e3069d015518dcd5e4c0967245dd74359ccdd3a693e5b4e26b330a139e95ab9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [3de092804020ec874ad903cba82425d744cade6acadd234fae7472c54a580e7b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-071971
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-071971
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=default-k8s-diff-port-071971
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T21_01_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 21:01:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-071971
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 21:19:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 21:17:01 +0000   Mon, 08 Jul 2024 21:01:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 21:17:01 +0000   Mon, 08 Jul 2024 21:01:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 21:17:01 +0000   Mon, 08 Jul 2024 21:01:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 21:17:01 +0000   Mon, 08 Jul 2024 21:01:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.163
	  Hostname:    default-k8s-diff-port-071971
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9971f0cfcb78465ebb3b469ae22caf80
	  System UUID:                9971f0cf-cb78-465e-bb3b-469ae22caf80
	  Boot ID:                    d6b9f9cb-247a-44ef-8525-631937b2bb57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8msvk                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-7db6d8ff4d-hq7zj                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-071971                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-default-k8s-diff-port-071971             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-071971    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-l2mdd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-071971             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-569cc877fc-k8vhl                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-071971 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-071971 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-071971 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node default-k8s-diff-port-071971 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node default-k8s-diff-port-071971 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node default-k8s-diff-port-071971 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-071971 event: Registered Node default-k8s-diff-port-071971 in Controller
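	
	The node description above can be regenerated against the same cluster for comparison. A minimal sketch, assuming the kubeconfig context created for this profile (named after the profile, as elsewhere in this report) is still present:
	
	  kubectl --context default-k8s-diff-port-071971 describe node default-k8s-diff-port-071971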
	
	
	==> dmesg <==
	[  +0.051149] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041333] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul 8 20:56] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.340585] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.378992] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.689125] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.135370] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.186742] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.159466] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.338648] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +4.629238] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.069894] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.476423] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +5.573033] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.324864] kauditd_printk_skb: 50 callbacks suppressed
	[  +7.027136] kauditd_printk_skb: 27 callbacks suppressed
	[Jul 8 21:01] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.686524] systemd-fstab-generator[3577]: Ignoring "noauto" option for root device
	[  +4.750982] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.332042] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	[ +14.375769] systemd-fstab-generator[4117]: Ignoring "noauto" option for root device
	[  +0.008630] kauditd_printk_skb: 14 callbacks suppressed
	[Jul 8 21:02] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [0f55d96f0b61615e83effa00dfff2f7f1cb7042fa84dd01741ec99c489c1cb0b] <==
	{"level":"info","ts":"2024-07-08T21:01:15.149365Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31866a174e81d2aa","local-member-id":"3dd8974a0ddcfcd8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:01:15.149434Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:01:15.149451Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T21:01:15.150543Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.163:2379"}
	{"level":"info","ts":"2024-07-08T21:01:15.151347Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T21:11:15.184765Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":709}
	{"level":"info","ts":"2024-07-08T21:11:15.196448Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":709,"took":"11.244982ms","hash":2267833331,"current-db-size-bytes":2265088,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2265088,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-08T21:11:15.19655Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2267833331,"revision":709,"compact-revision":-1}
	{"level":"info","ts":"2024-07-08T21:16:15.193842Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":953}
	{"level":"info","ts":"2024-07-08T21:16:15.198679Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":953,"took":"3.946135ms","hash":2173117068,"current-db-size-bytes":2265088,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1622016,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-08T21:16:15.198793Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2173117068,"revision":953,"compact-revision":709}
	{"level":"info","ts":"2024-07-08T21:17:06.9818Z","caller":"traceutil/trace.go:171","msg":"trace[1797006652] linearizableReadLoop","detail":"{readStateIndex:1445; appliedIndex:1444; }","duration":"152.438528ms","start":"2024-07-08T21:17:06.829303Z","end":"2024-07-08T21:17:06.981742Z","steps":["trace[1797006652] 'read index received'  (duration: 152.173832ms)","trace[1797006652] 'applied index is now lower than readState.Index'  (duration: 263.819µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T21:17:06.98202Z","caller":"traceutil/trace.go:171","msg":"trace[1642201189] transaction","detail":"{read_only:false; response_revision:1240; number_of_response:1; }","duration":"158.921187ms","start":"2024-07-08T21:17:06.823081Z","end":"2024-07-08T21:17:06.982002Z","steps":["trace[1642201189] 'process raft request'  (duration: 158.442345ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T21:17:06.982272Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.895278ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-08T21:17:06.982396Z","caller":"traceutil/trace.go:171","msg":"trace[362369925] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1240; }","duration":"153.148956ms","start":"2024-07-08T21:17:06.829229Z","end":"2024-07-08T21:17:06.982378Z","steps":["trace[362369925] 'agreement among raft nodes before linearized reading'  (duration: 152.899457ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T21:17:33.991537Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.117467ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-08T21:17:33.99167Z","caller":"traceutil/trace.go:171","msg":"trace[2139072297] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1261; }","duration":"101.298418ms","start":"2024-07-08T21:17:33.89036Z","end":"2024-07-08T21:17:33.991659Z","steps":["trace[2139072297] 'agreement among raft nodes before linearized reading'  (duration: 101.104744ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T21:17:33.991399Z","caller":"traceutil/trace.go:171","msg":"trace[1730577696] linearizableReadLoop","detail":"{readStateIndex:1472; appliedIndex:1471; }","duration":"100.906639ms","start":"2024-07-08T21:17:33.890392Z","end":"2024-07-08T21:17:33.991299Z","steps":["trace[1730577696] 'read index received'  (duration: 42.318143ms)","trace[1730577696] 'applied index is now lower than readState.Index'  (duration: 58.58695ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-08T21:17:59.379921Z","caller":"traceutil/trace.go:171","msg":"trace[1844181946] transaction","detail":"{read_only:false; response_revision:1284; number_of_response:1; }","duration":"102.257495ms","start":"2024-07-08T21:17:59.277632Z","end":"2024-07-08T21:17:59.37989Z","steps":["trace[1844181946] 'process raft request'  (duration: 101.829901ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T21:18:27.570784Z","caller":"traceutil/trace.go:171","msg":"trace[332506284] transaction","detail":"{read_only:false; response_revision:1305; number_of_response:1; }","duration":"353.195767ms","start":"2024-07-08T21:18:27.217557Z","end":"2024-07-08T21:18:27.570753Z","steps":["trace[332506284] 'process raft request'  (duration: 352.911249ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T21:18:27.572117Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-08T21:18:27.217537Z","time spent":"353.517863ms","remote":"127.0.0.1:40264","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":692,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ght225ztxbvusixvpnkupei3nu\" mod_revision:1297 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ght225ztxbvusixvpnkupei3nu\" value_size:619 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ght225ztxbvusixvpnkupei3nu\" > >"}
	{"level":"warn","ts":"2024-07-08T21:18:56.083476Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.587337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-08T21:18:56.083716Z","caller":"traceutil/trace.go:171","msg":"trace[1745312225] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1328; }","duration":"167.886671ms","start":"2024-07-08T21:18:55.91579Z","end":"2024-07-08T21:18:56.083677Z","steps":["trace[1745312225] 'range keys from in-memory index tree'  (duration: 167.404743ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-08T21:19:14.107453Z","caller":"traceutil/trace.go:171","msg":"trace[1684555186] transaction","detail":"{read_only:false; response_revision:1343; number_of_response:1; }","duration":"155.25718ms","start":"2024-07-08T21:19:13.952151Z","end":"2024-07-08T21:19:14.107408Z","steps":["trace[1684555186] 'process raft request'  (duration: 155.071687ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-08T21:19:14.324844Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.134422ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18219471258504018280 > lease_revoke:<id:7cd890942692391f>","response":"size:27"}
	
	
	==> kernel <==
	 21:19:43 up 23 min,  0 users,  load average: 0.42, 0.45, 0.31
	Linux default-k8s-diff-port-071971 5.10.207 #1 SMP Wed Jul 3 17:51:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3e3647b50ce1b4e99d8a409635d93fb22ffbdad34501c3dcbf031498e75ffbab] <==
	I0708 21:14:17.943324       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:16:16.946191       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:16:16.946310       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0708 21:16:17.947046       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:16:17.947143       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:16:17.947155       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:16:17.947209       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:16:17.947282       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:16:17.948318       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:17:17.947876       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:17:17.948007       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:17:17.948015       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:17:17.949096       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:17:17.949191       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:17:17.949222       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:19:17.948907       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:19:17.949362       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 21:19:17.949398       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0708 21:19:17.949516       1 handler_proxy.go:93] no RequestInfo found in the context
	E0708 21:19:17.949635       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0708 21:19:17.950847       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [16947ba6fb46a98e68f1a9f8639e8ceb7d4ce698bbbdc562e43dfbfb921bc130] <==
	I0708 21:14:04.186793       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:14:33.650053       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:14:34.195999       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:15:03.655654       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:15:04.205801       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:15:33.662335       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:15:34.215406       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:16:03.668556       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:16:04.224157       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:16:33.675221       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:16:34.233497       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:17:03.681395       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:17:04.242929       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:17:33.687088       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:17:34.253202       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0708 21:17:41.707173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="264.363µs"
	I0708 21:17:55.711371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="169.242µs"
	E0708 21:18:03.696944       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:18:04.262850       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:18:33.702771       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:18:34.272675       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:19:03.708356       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:19:04.282032       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0708 21:19:33.720970       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0708 21:19:34.296780       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3e4e3b0cd648694b3f58bf5d849690114c88e9bbf8bb427f3f7a291c723ea4ac] <==
	I0708 21:01:34.617748       1 server_linux.go:69] "Using iptables proxy"
	I0708 21:01:34.643158       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.163"]
	I0708 21:01:34.804167       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 21:01:34.804214       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 21:01:34.804231       1 server_linux.go:165] "Using iptables Proxier"
	I0708 21:01:34.813606       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 21:01:34.814275       1 server.go:872] "Version info" version="v1.30.2"
	I0708 21:01:34.814308       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 21:01:34.815750       1 config.go:192] "Starting service config controller"
	I0708 21:01:34.817663       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 21:01:34.817770       1 config.go:101] "Starting endpoint slice config controller"
	I0708 21:01:34.817777       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 21:01:34.818662       1 config.go:319] "Starting node config controller"
	I0708 21:01:34.818688       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 21:01:34.918202       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 21:01:34.927456       1 shared_informer.go:320] Caches are synced for service config
	I0708 21:01:34.927596       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b6924e8ced977682d418cea0d436ce49cf79ee382272cb973c8dce7ef6eed6b5] <==
	W0708 21:01:16.946878       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 21:01:16.946907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 21:01:16.946998       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 21:01:16.947029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 21:01:16.947155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 21:01:16.947309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 21:01:16.947340       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 21:01:16.947483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 21:01:16.947252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 21:01:16.947536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 21:01:17.859413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 21:01:17.859645       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 21:01:18.026179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 21:01:18.026236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 21:01:18.138934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 21:01:18.138984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 21:01:18.192136       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 21:01:18.192205       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 21:01:18.208198       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 21:01:18.208266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 21:01:18.212926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 21:01:18.212985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 21:01:18.250513       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 21:01:18.250622       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0708 21:01:21.138971       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 08 21:17:19 default-k8s-diff-port-071971 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:17:26 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:17:26.720183    3902 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 08 21:17:26 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:17:26.720272    3902 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 08 21:17:26 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:17:26.720486    3902 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwjn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdi
nOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-k8vhl_kube-system(09f957f3-d76f-4f21-b9a6-e5b249d07e1e): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 08 21:17:26 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:17:26.720539    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:17:41 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:17:41.689986    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:17:55 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:17:55.694226    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:18:08 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:18:08.690053    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:18:19 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:18:19.744798    3902 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:18:19 default-k8s-diff-port-071971 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:18:19 default-k8s-diff-port-071971 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:18:19 default-k8s-diff-port-071971 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:18:19 default-k8s-diff-port-071971 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:18:20 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:18:20.691330    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:18:33 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:18:33.689734    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:18:48 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:18:48.690133    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:19:00 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:19:00.690323    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:19:15 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:19:15.691360    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:19:19 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:19:19.747542    3902 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 21:19:19 default-k8s-diff-port-071971 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 21:19:19 default-k8s-diff-port-071971 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 21:19:19 default-k8s-diff-port-071971 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 21:19:19 default-k8s-diff-port-071971 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 08 21:19:30 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:19:30.690480    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	Jul 08 21:19:41 default-k8s-diff-port-071971 kubelet[3902]: E0708 21:19:41.690204    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8vhl" podUID="09f957f3-d76f-4f21-b9a6-e5b249d07e1e"
	
	
	==> storage-provisioner [e4084256f6479f4d4d67c4cf0c6e045ed54a7e9d883968077655fa6a188e7e5a] <==
	I0708 21:01:35.591791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 21:01:35.623911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 21:01:35.624014       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 21:01:35.644823       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 21:01:35.645028       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-071971_e56a7e45-5712-4549-80e8-7683024bf04c!
	I0708 21:01:35.652315       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db7d01ea-b577-4a29-80ee-0b856bf5f5f1", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-071971_e56a7e45-5712-4549-80e8-7683024bf04c became leader
	I0708 21:01:35.745265       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-071971_e56a7e45-5712-4549-80e8-7683024bf04c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-071971 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-k8vhl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-071971 describe pod metrics-server-569cc877fc-k8vhl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-071971 describe pod metrics-server-569cc877fc-k8vhl: exit status 1 (87.53354ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-k8vhl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-071971 describe pod metrics-server-569cc877fc-k8vhl: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.68s)

                                                
                                    

Test pass (253/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.28
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.2/json-events 3.9
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.06
18 TestDownloadOnly/v1.30.2/DeleteAll 0.13
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.55
22 TestOffline 72.35
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 136.96
29 TestAddons/parallel/Registry 15.97
31 TestAddons/parallel/InspektorGadget 10.96
33 TestAddons/parallel/HelmTiller 11.97
35 TestAddons/parallel/CSI 91.33
36 TestAddons/parallel/Headlamp 14.07
37 TestAddons/parallel/CloudSpanner 6.64
38 TestAddons/parallel/LocalPath 10.12
39 TestAddons/parallel/NvidiaDevicePlugin 5.61
40 TestAddons/parallel/Yakd 5.01
44 TestAddons/serial/GCPAuth/Namespaces 0.11
46 TestCertOptions 56.63
47 TestCertExpiration 248.02
49 TestForceSystemdFlag 45.07
50 TestForceSystemdEnv 61.88
52 TestKVMDriverInstallOrUpdate 1.49
56 TestErrorSpam/setup 42.66
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.58
60 TestErrorSpam/unpause 1.61
61 TestErrorSpam/stop 5.58
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 99.38
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 39.49
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.2
73 TestFunctional/serial/CacheCmd/cache/add_local 1.05
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.6
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 35.67
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.44
84 TestFunctional/serial/LogsFileCmd 1.5
85 TestFunctional/serial/InvalidService 4.07
87 TestFunctional/parallel/ConfigCmd 0.32
88 TestFunctional/parallel/DashboardCmd 10.35
89 TestFunctional/parallel/DryRun 0.27
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 1.09
95 TestFunctional/parallel/ServiceCmdConnect 10.62
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 34.47
99 TestFunctional/parallel/SSHCmd 0.49
100 TestFunctional/parallel/CpCmd 1.47
102 TestFunctional/parallel/FileSync 0.25
103 TestFunctional/parallel/CertSync 1.49
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
111 TestFunctional/parallel/License 0.17
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.76
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.37
119 TestFunctional/parallel/ImageCommands/Setup 1
120 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.82
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.64
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.04
136 TestFunctional/parallel/ServiceCmd/List 0.31
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
139 TestFunctional/parallel/ServiceCmd/Format 0.38
140 TestFunctional/parallel/ServiceCmd/URL 0.36
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
142 TestFunctional/parallel/ProfileCmd/profile_list 0.32
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
144 TestFunctional/parallel/MountCmd/any-port 6.36
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.96
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.49
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.1
149 TestFunctional/parallel/MountCmd/specific-port 1.69
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
151 TestFunctional/delete_addon-resizer_images 0.08
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 196.9
158 TestMultiControlPlane/serial/DeployApp 4.9
159 TestMultiControlPlane/serial/PingHostFromPods 1.22
160 TestMultiControlPlane/serial/AddWorkerNode 46.97
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
163 TestMultiControlPlane/serial/CopyFile 12.7
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.46
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
169 TestMultiControlPlane/serial/DeleteSecondaryNode 17.43
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
172 TestMultiControlPlane/serial/RestartCluster 353.39
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
174 TestMultiControlPlane/serial/AddSecondaryNode 70.52
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 61.85
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.76
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.66
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.38
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 94.05
211 TestMountStart/serial/StartWithMountFirst 24.59
212 TestMountStart/serial/VerifyMountFirst 0.38
213 TestMountStart/serial/StartWithMountSecond 24.1
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.69
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 22.22
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 95.92
223 TestMultiNode/serial/DeployApp2Nodes 3.83
224 TestMultiNode/serial/PingHostFrom2Pods 0.83
225 TestMultiNode/serial/AddNode 41.83
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 7.27
229 TestMultiNode/serial/StopNode 2.36
230 TestMultiNode/serial/StartAfterStop 27.29
232 TestMultiNode/serial/DeleteNode 2.35
234 TestMultiNode/serial/RestartMultiNode 181.08
235 TestMultiNode/serial/ValidateNameConflict 45.94
242 TestScheduledStopUnix 114.38
246 TestRunningBinaryUpgrade 228.44
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 96.5
269 TestNetworkPlugins/group/false 3.25
273 TestNoKubernetes/serial/StartWithStopK8s 39.07
274 TestNoKubernetes/serial/Start 27.04
276 TestPause/serial/Start 104.61
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
278 TestNoKubernetes/serial/ProfileList 1.55
279 TestNoKubernetes/serial/Stop 1.29
280 TestNoKubernetes/serial/StartNoArgs 44.07
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
282 TestPause/serial/SecondStartNoReconfiguration 43.8
283 TestPause/serial/Pause 0.94
284 TestPause/serial/VerifyStatus 0.31
286 TestPause/serial/Unpause 0.92
288 TestPause/serial/PauseAgain 1.09
289 TestPause/serial/DeletePaused 1.5
290 TestPause/serial/VerifyDeletedResources 0.53
292 TestStartStop/group/no-preload/serial/FirstStart 75.11
294 TestStartStop/group/embed-certs/serial/FirstStart 111.95
295 TestStartStop/group/no-preload/serial/DeployApp 9.31
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
298 TestStartStop/group/old-k8s-version/serial/Stop 4.35
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
301 TestStartStop/group/embed-certs/serial/DeployApp 8.28
302 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.14
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.4
307 TestStartStop/group/no-preload/serial/SecondStart 632.36
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
312 TestStartStop/group/embed-certs/serial/SecondStart 571.75
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 481.69
323 TestStoppedBinaryUpgrade/Setup 0.57
324 TestStoppedBinaryUpgrade/Upgrade 101.5
326 TestStartStop/group/newest-cni/serial/FirstStart 69.69
327 TestNetworkPlugins/group/auto/Start 102.55
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.7
330 TestStartStop/group/newest-cni/serial/Stop 7.36
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
332 TestStartStop/group/newest-cni/serial/SecondStart 50.02
333 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
334 TestNetworkPlugins/group/kindnet/Start 84.23
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
338 TestStartStop/group/newest-cni/serial/Pause 2.58
339 TestNetworkPlugins/group/calico/Start 86.78
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.92
342 TestNetworkPlugins/group/custom-flannel/Start 88.7
343 TestNetworkPlugins/group/auto/KubeletFlags 0.25
344 TestNetworkPlugins/group/auto/NetCatPod 12.32
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
346 TestNetworkPlugins/group/auto/DNS 0.21
347 TestNetworkPlugins/group/auto/Localhost 0.13
348 TestNetworkPlugins/group/auto/HairPin 0.17
349 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
350 TestNetworkPlugins/group/kindnet/NetCatPod 11.25
351 TestNetworkPlugins/group/kindnet/DNS 0.26
352 TestNetworkPlugins/group/kindnet/Localhost 0.17
353 TestNetworkPlugins/group/kindnet/HairPin 0.21
354 TestNetworkPlugins/group/enable-default-cni/Start 102.12
355 TestNetworkPlugins/group/flannel/Start 97.08
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/KubeletFlags 0.23
358 TestNetworkPlugins/group/calico/NetCatPod 12.3
359 TestNetworkPlugins/group/calico/DNS 0.2
360 TestNetworkPlugins/group/calico/Localhost 0.17
361 TestNetworkPlugins/group/calico/HairPin 0.17
362 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
364 TestNetworkPlugins/group/custom-flannel/DNS 0.24
365 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
366 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
367 TestNetworkPlugins/group/bridge/Start 66.28
368 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
369 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
370 TestNetworkPlugins/group/flannel/ControllerPod 6.01
371 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
372 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
373 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
374 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
375 TestNetworkPlugins/group/flannel/NetCatPod 11.23
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
377 TestNetworkPlugins/group/bridge/NetCatPod 10.21
378 TestNetworkPlugins/group/flannel/DNS 0.16
379 TestNetworkPlugins/group/flannel/Localhost 0.15
380 TestNetworkPlugins/group/flannel/HairPin 0.13
381 TestNetworkPlugins/group/bridge/DNS 0.16
382 TestNetworkPlugins/group/bridge/Localhost 0.15
383 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (7.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-548391 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-548391 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.279022189s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.28s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-548391
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-548391: exit status 85 (56.936291ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-548391 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC |          |
	|         | -p download-only-548391        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 19:29:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 19:29:00.159370   13153 out.go:291] Setting OutFile to fd 1 ...
	I0708 19:29:00.159607   13153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:29:00.159616   13153 out.go:304] Setting ErrFile to fd 2...
	I0708 19:29:00.159621   13153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:29:00.159797   13153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	W0708 19:29:00.159924   13153 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19195-5988/.minikube/config/config.json: open /home/jenkins/minikube-integration/19195-5988/.minikube/config/config.json: no such file or directory
	I0708 19:29:00.160461   13153 out.go:298] Setting JSON to true
	I0708 19:29:00.161298   13153 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":689,"bootTime":1720466251,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 19:29:00.161355   13153 start.go:139] virtualization: kvm guest
	I0708 19:29:00.163714   13153 out.go:97] [download-only-548391] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0708 19:29:00.163820   13153 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball: no such file or directory
	I0708 19:29:00.163853   13153 notify.go:220] Checking for updates...
	I0708 19:29:00.165616   13153 out.go:169] MINIKUBE_LOCATION=19195
	I0708 19:29:00.167304   13153 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 19:29:00.168649   13153 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:29:00.170221   13153 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:29:00.171747   13153 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0708 19:29:00.174478   13153 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0708 19:29:00.174707   13153 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 19:29:00.273963   13153 out.go:97] Using the kvm2 driver based on user configuration
	I0708 19:29:00.273993   13153 start.go:297] selected driver: kvm2
	I0708 19:29:00.274000   13153 start.go:901] validating driver "kvm2" against <nil>
	I0708 19:29:00.274333   13153 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 19:29:00.274453   13153 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19195-5988/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0708 19:29:00.289600   13153 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0708 19:29:00.289672   13153 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 19:29:00.290352   13153 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0708 19:29:00.290546   13153 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0708 19:29:00.290627   13153 cni.go:84] Creating CNI manager for ""
	I0708 19:29:00.290645   13153 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0708 19:29:00.290656   13153 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 19:29:00.290731   13153 start.go:340] cluster config:
	{Name:download-only-548391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-548391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:29:00.290974   13153 iso.go:125] acquiring lock: {Name:mkb5cc5061ba7accede97e12b0ec4ee3df03bec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 19:29:00.293044   13153 out.go:97] Downloading VM boot image ...
	I0708 19:29:00.293091   13153 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19195-5988/.minikube/cache/iso/amd64/minikube-v1.33.1-1720011972-19186-amd64.iso
	I0708 19:29:02.944875   13153 out.go:97] Starting "download-only-548391" primary control-plane node in "download-only-548391" cluster
	I0708 19:29:02.944901   13153 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0708 19:29:02.964967   13153 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0708 19:29:02.965001   13153 cache.go:56] Caching tarball of preloaded images
	I0708 19:29:02.965170   13153 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0708 19:29:02.967214   13153 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0708 19:29:02.967251   13153 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0708 19:29:02.992392   13153 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19195-5988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-548391 host does not exist
	  To start a cluster, run: "minikube start -p download-only-548391"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-548391
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/json-events (3.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-972529 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-972529 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.903581831s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (3.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-972529
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-972529: exit status 85 (56.177525ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-548391 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC |                     |
	|         | -p download-only-548391        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:29 UTC |
	| delete  | -p download-only-548391        | download-only-548391 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC | 08 Jul 24 19:29 UTC |
	| start   | -o=json --download-only        | download-only-972529 | jenkins | v1.33.1 | 08 Jul 24 19:29 UTC |                     |
	|         | -p download-only-972529        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 19:29:07
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 19:29:07.749786   13346 out.go:291] Setting OutFile to fd 1 ...
	I0708 19:29:07.750044   13346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:29:07.750055   13346 out.go:304] Setting ErrFile to fd 2...
	I0708 19:29:07.750059   13346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:29:07.750250   13346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 19:29:07.750803   13346 out.go:298] Setting JSON to true
	I0708 19:29:07.751623   13346 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":697,"bootTime":1720466251,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 19:29:07.751681   13346 start.go:139] virtualization: kvm guest
	I0708 19:29:07.753783   13346 out.go:97] [download-only-972529] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 19:29:07.753918   13346 notify.go:220] Checking for updates...
	I0708 19:29:07.755264   13346 out.go:169] MINIKUBE_LOCATION=19195
	I0708 19:29:07.756645   13346 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 19:29:07.757986   13346 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:29:07.759255   13346 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:29:07.760556   13346 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-972529 host does not exist
	  To start a cluster, run: "minikube start -p download-only-972529"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-972529
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)
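
The download-only group above can be reproduced by hand. A minimal sketch using the same kvm2/CRI-O combination as this run; the profile name is illustrative:

# cache the ISO, the preload tarball and the k8s binaries for one version, without booting a VM
minikube start -p download-demo --download-only \
  --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2
# nothing is running, so "minikube logs -p download-demo" exits 85 as captured above;
# the cached profile is simply deleted again
minikube delete -p download-demo
minikube delete --all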

                                                
                                    
x
+
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-230858 --alsologtostderr --binary-mirror http://127.0.0.1:39545 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-230858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-230858
--- PASS: TestBinaryMirror (0.55s)
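
TestBinaryMirror points minikube at a locally served binary mirror instead of the default download location. A rough sketch, assuming the mirror directory mimics the upstream layout (e.g. ./mirror/v1.30.2/bin/linux/amd64/kubelet); the port and profile name here are arbitrary:

# serve pre-downloaded kubeadm/kubelet/kubectl binaries over plain HTTP
python3 -m http.server 39545 --directory ./mirror &
# fetch the binaries through the mirror rather than the default URL
minikube start --download-only -p mirror-demo --binary-mirror http://127.0.0.1:39545 \
  --driver=kvm2 --container-runtime=crio
minikube delete -p mirror-demo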

                                                
                                    
x
+
TestOffline (72.35s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-558526 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-558526 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m11.377828482s)
helpers_test.go:175: Cleaning up "offline-crio-558526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-558526
--- PASS: TestOffline (72.35s)
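
How the offline test isolates itself from the network is not visible in this excerpt, but the start can only succeed because everything it needs is already cached locally. A sketch of pre-seeding that cache so a later start has nothing left to download (profile name illustrative):

# pull the ISO, preload tarball and binaries into ~/.minikube/cache while still online
minikube start -p offline-demo --download-only --container-runtime=crio --driver=kvm2
# a subsequent real start can then come up from the cache alone
minikube start -p offline-demo --memory=2048 --wait=true --container-runtime=crio --driver=kvm2
minikube delete -p offline-demo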

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-268316
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-268316: exit status 85 (54.146933ms)

                                                
                                                
-- stdout --
	* Profile "addons-268316" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-268316"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-268316
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-268316: exit status 85 (53.177035ms)

                                                
                                                
-- stdout --
	* Profile "addons-268316" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-268316"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
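
Both PreSetup checks confirm that addon commands refuse to act on a profile that has not been created yet (exit status 85 plus a hint, as captured above). The same behaviour from a shell, with throwaway profile names:

minikube addons enable dashboard -p no-such-profile; echo "exit code: $?"   # prints the hint, exits 85
# once the profile actually exists, enable/disable work as usual
minikube start -p addons-demo --driver=kvm2 --container-runtime=crio
minikube addons enable dashboard -p addons-demo
minikube addons disable dashboard -p addons-demo
minikube delete -p addons-demo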

                                                
                                    
x
+
TestAddons/Setup (136.96s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-268316 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-268316 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m16.964638271s)
--- PASS: TestAddons/Setup (136.96s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 19.50726ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-g8hs8" [36f4018c-5097-47ad-b3e0-a8a225032ab3] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005262895s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rrxb2" [ebfad772-c807-408a-81ef-0f5d1ad1b929] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005344278s
addons_test.go:342: (dbg) Run:  kubectl --context addons-268316 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-268316 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-268316 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.181767355s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 ip
2024/07/08 19:31:45 [DEBUG] GET http://192.168.39.231:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.97s)
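
The registry check above exercises the addon from two sides: a busybox pod probing the in-cluster service name, and a direct GET against the node IP on port 5000. The same probes by hand; the context name is taken from this run, while the /v2/_catalog path is the standard registry API rather than something this log shows:

kubectl --context addons-268316 run registry-test --rm -it --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
curl -s "http://$(minikube -p addons-268316 ip):5000/v2/_catalog"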

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.96s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zf99n" [96bc9c70-fdfd-4f54-ad90-e183ca915ac1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003612241s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-268316
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-268316: (5.959651586s)
--- PASS: TestAddons/parallel/InspektorGadget (10.96s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.97s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 19.503443ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-lmtgw" [785aba76-863a-4bd2-a24f-c7eaa42f49b4] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015004776s
addons_test.go:475: (dbg) Run:  kubectl --context addons-268316 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-268316 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.227882944s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.97s)

                                                
                                    
x
+
TestAddons/parallel/CSI (91.33s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 23.733056ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-268316 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268316 get pvc hpvc -o jsonpath={.status.phase} -n default
(the "kubectl ... get pvc hpvc" poll above was repeated 44 more times while the test waited for the claim)
addons_test.go:576: (dbg) Run:  kubectl --context addons-268316 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2ec74f1a-5d81-40af-aa0d-754f967611bc] Pending
helpers_test.go:344: "task-pv-pod" [2ec74f1a-5d81-40af-aa0d-754f967611bc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2ec74f1a-5d81-40af-aa0d-754f967611bc] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003937073s
addons_test.go:586: (dbg) Run:  kubectl --context addons-268316 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-268316 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-268316 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-268316 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-268316 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-268316 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268316 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
(the "kubectl ... get pvc hpvc-restore" poll above was repeated 14 more times while the test waited for the restored claim)
addons_test.go:618: (dbg) Run:  kubectl --context addons-268316 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [94cd92d3-22d2-42c6-ac73-f82732c67384] Pending
helpers_test.go:344: "task-pv-pod-restore" [94cd92d3-22d2-42c6-ac73-f82732c67384] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [94cd92d3-22d2-42c6-ac73-f82732c67384] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004382179s
addons_test.go:628: (dbg) Run:  kubectl --context addons-268316 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-268316 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-268316 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-268316 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.847276563s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (91.33s)
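
The interesting part of the CSI flow is the snapshot/restore pair: a VolumeSnapshot is taken of the bound "hpvc" claim and a new claim is provisioned from it via dataSource. The testdata manifests themselves are not included in this report, so the sketch below is an approximation; the class names (csi-hostpath-sc, csi-hostpath-snapclass) and the 1Gi size are assumptions about the addon's defaults:

# assumes the "hpvc" claim from the earlier step already exists and is bound
kubectl --context addons-268316 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
EOF
# the same readiness field the helper polls for
kubectl --context addons-268316 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'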

                                                
                                    
x
+
TestAddons/parallel/Headlamp (14.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-268316 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-268316 --alsologtostderr -v=1: (1.063913711s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-cgkpr" [61b3fef5-b549-4aab-a5f7-da35eb3d4477] Pending
helpers_test.go:344: "headlamp-7867546754-cgkpr" [61b3fef5-b549-4aab-a5f7-da35eb3d4477] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-cgkpr" [61b3fef5-b549-4aab-a5f7-da35eb3d4477] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004091836s
--- PASS: TestAddons/parallel/Headlamp (14.07s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-jcxcf" [cfee911a-10a3-4367-92d0-844e9d13b386] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003474182s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-268316
--- PASS: TestAddons/parallel/CloudSpanner (6.64s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.12s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-268316 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-268316 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268316 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268316 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268316 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268316 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268316 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268316 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9d52df83-461e-4e6a-ae0a-f70e5ac29e83] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9d52df83-461e-4e6a-ae0a-f70e5ac29e83] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9d52df83-461e-4e6a-ae0a-f70e5ac29e83] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004745567s
addons_test.go:992: (dbg) Run:  kubectl --context addons-268316 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 ssh "cat /opt/local-path-provisioner/pvc-fe0dcfdc-b3e9-41ce-a1cc-00fdfd88c367_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-268316 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-268316 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-268316 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.12s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-s4n9d" [bd2137b3-9f97-4991-91e6-20ab23e68c75] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006247758s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-268316
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-rf6p2" [3ac6741a-bec9-4f29-a6eb-c73c7500970b] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004558798s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-268316 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-268316 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)
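
This check relies on the gcp-auth addon copying its credentials secret into every newly created namespace. The same verification with a throwaway namespace (the namespace name is illustrative):

kubectl --context addons-268316 create ns gcp-auth-check
kubectl --context addons-268316 get secret gcp-auth -n gcp-auth-check
kubectl --context addons-268316 delete ns gcp-auth-check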

                                                
                                    
x
+
TestCertOptions (56.63s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-059722 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0708 20:46:29.733152   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-059722 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (55.328774276s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-059722 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-059722 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-059722 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-059722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-059722
--- PASS: TestCertOptions (56.63s)
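
The certificate inspection above can be narrowed to just the pieces the flags control. The openssl invocation and certificate path are the ones from this run; the grep is only a convenience for pulling out the SAN list:

# the extra --apiserver-ips/--apiserver-names values should appear as SANs
minikube -p cert-options-059722 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# the kubeconfig server URL should use the non-default --apiserver-port=8555
kubectl --context cert-options-059722 config view | grep server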

                                                
                                    
x
+
TestCertExpiration (248.02s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-112887 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-112887 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (49.04491272s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-112887 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-112887 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (17.96209997s)
helpers_test.go:175: Cleaning up "cert-expiration-112887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-112887
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-112887: (1.012939483s)
--- PASS: TestCertExpiration (248.02s)
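
Reading the timings above (49s + 18s of starts inside a 248s test), the test appears to wait out the 3-minute expiry before the second start, which then re-issues certificates with the longer --cert-expiration. A sketch of that sequence with an illustrative profile name; the explicit sleep is my reading of the gap, not something the log states:

minikube start -p cert-exp-demo --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
sleep 180   # let the short-lived certs expire
minikube start -p cert-exp-demo --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
minikube delete -p cert-exp-demo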

                                                
                                    
x
+
TestForceSystemdFlag (45.07s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-221360 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-221360 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.844253031s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-221360 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-221360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-221360
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-221360: (1.031500849s)
--- PASS: TestForceSystemdFlag (45.07s)
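
The flag test boots with --force-systemd and then reads CRI-O's drop-in config from the node. Grepping that file for the cgroup manager is one way to see the effect; the exact assertion the test makes on the file is not shown here:

minikube start -p systemd-demo --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
minikube -p systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager
minikube delete -p systemd-demo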

                                                
                                    
x
+
TestForceSystemdEnv (61.88s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-897719 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-897719 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.892528061s)
helpers_test.go:175: Cleaning up "force-systemd-env-897719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-897719
--- PASS: TestForceSystemdEnv (61.88s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.49s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.49s)

                                                
                                    
x
+
TestErrorSpam/setup (42.66s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-784677 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-784677 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-784677 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-784677 --driver=kvm2  --container-runtime=crio: (42.662414406s)
--- PASS: TestErrorSpam/setup (42.66s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
x
+
TestErrorSpam/pause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

                                                
                                    
x
+
TestErrorSpam/stop (5.58s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 stop: (2.280628073s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 stop: (1.868288496s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-784677 --log_dir /tmp/nospam-784677 stop: (1.430322181s)
--- PASS: TestErrorSpam/stop (5.58s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19195-5988/.minikube/files/etc/test/nested/copy/13141/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (99.38s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-787563 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0708 19:41:29.734685   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:41:29.740494   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:41:29.750774   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:41:29.771034   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:41:29.811386   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:41:29.891785   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:41:30.052267   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:41:30.372992   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:41:31.013991   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:41:32.294489   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:41:34.856373   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:41:39.977015   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:41:50.217984   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:42:10.699108   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:42:51.660567   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-787563 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m39.376510214s)
--- PASS: TestFunctional/serial/StartWithProxy (99.38s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (39.49s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-787563 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-787563 --alsologtostderr -v=8: (39.493076412s)
functional_test.go:659: soft start took 39.493733698s for "functional-787563" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.49s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-787563 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-787563 cache add registry.k8s.io/pause:3.3: (1.160169475s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-787563 cache add registry.k8s.io/pause:latest: (1.060591993s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.20s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-787563 /tmp/TestFunctionalserialCacheCmdcacheadd_local1565118789/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 cache add minikube-local-cache-test:functional-787563
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 cache delete minikube-local-cache-test:functional-787563
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-787563
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)
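
The add_local flow builds a throwaway image on the host and pushes it into minikube's image cache. The commands below mirror the ones logged above; the build-context path is a placeholder for whatever Dockerfile directory is at hand:

docker build -t minikube-local-cache-test:functional-787563 ./local-cache-context
minikube -p functional-787563 cache add minikube-local-cache-test:functional-787563
minikube -p functional-787563 cache delete minikube-local-cache-test:functional-787563
docker rmi minikube-local-cache-test:functional-787563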

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787563 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (207.292816ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.60s)
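
The cache_reload sequence above is a useful recipe on its own: delete the cached pause image inside the node, confirm crictl no longer sees it, run `cache reload`, and confirm it is back. Below is a minimal Go sketch of the same loop, assuming a minikube binary on PATH as `minikube` (the run above invokes it as out/minikube-linux-amd64) and the existing profile name from this log; the `mk` helper is illustrative, not part of minikube.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

const profile = "functional-787563" // profile name from this run; use your own

// mk runs the minikube binary against the profile and returns its error.
func mk(args ...string) error {
	cmd := exec.Command("minikube", append([]string{"-p", profile}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ minikube -p %s %v\n%s", profile, args, out)
	return err
}

func main() {
	// 1. Remove the cached image from inside the node.
	if err := mk("ssh", "sudo crictl rmi registry.k8s.io/pause:latest"); err != nil {
		log.Fatal(err)
	}
	// 2. inspecti should now fail, as it does in the log (exit status 1).
	if mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		log.Fatal("expected the image to be gone before reload")
	}
	// 3. Reload the local cache back into the node ...
	if err := mk("cache", "reload"); err != nil {
		log.Fatal(err)
	}
	// 4. ... and the image is visible to crictl again.
	if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatal(err)
	}
}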

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 kubectl -- --context functional-787563 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-787563 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.67s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-787563 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0708 19:44:13.581543   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-787563 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.66953524s)
functional_test.go:757: restart took 35.669637513s for "functional-787563" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.67s)
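
ExtraConfig restarts the same profile with a component flag pushed through `--extra-config` (here an apiserver admission plugin) and `--wait=all`, so start only returns once every component is healthy again. A one-command sketch, with the flag value taken from the log and the binary assumed to be on PATH as `minikube`:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Restart the existing profile, injecting an apiserver flag via --extra-config
	// and waiting for all components to come back, as the run above does.
	cmd := exec.Command("minikube", "start", "-p", "functional-787563",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}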

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-787563 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-787563 logs: (1.439022718s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 logs --file /tmp/TestFunctionalserialLogsFileCmd2327382624/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-787563 logs --file /tmp/TestFunctionalserialLogsFileCmd2327382624/001/logs.txt: (1.496678157s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

                                                
                                    
TestFunctional/serial/InvalidService (4.07s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-787563 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-787563
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-787563: exit status 115 (287.979489ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.54:31038 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-787563 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.07s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787563 config get cpus: exit status 14 (47.58361ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787563 config get cpus: exit status 14 (50.90163ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
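
The interesting part of ConfigCmd is the exit code: `config get` on an unset key returns status 14 rather than 0 with empty output, which is what the two "Non-zero exit ... exit status 14" lines above show. A sketch that exercises the same unset/get/set cycle and checks the code, assuming a `minikube` binary on PATH and the profile from the log; the `exitCode` helper is illustrative.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

// exitCode runs minikube with the given args and returns the process exit code.
func exitCode(args ...string) int {
	err := exec.Command("minikube", args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	if err != nil {
		log.Fatal(err)
	}
	return 0
}

func main() {
	p := "functional-787563" // profile from the log

	exitCode("-p", p, "config", "unset", "cpus")
	// A missing key is reported with exit status 14, as seen in the log.
	if code := exitCode("-p", p, "config", "get", "cpus"); code != 14 {
		log.Fatalf("expected exit 14 for a missing key, got %d", code)
	}
	exitCode("-p", p, "config", "set", "cpus", "2")
	if code := exitCode("-p", p, "config", "get", "cpus"); code != 0 {
		log.Fatalf("expected cpus to be set, got exit %d", code)
	}
	exitCode("-p", p, "config", "unset", "cpus")
	fmt.Println("config get/set/unset behaved as in the log above")
}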

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-787563 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-787563 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22877: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.35s)

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-787563 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-787563 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.422132ms)

                                                
                                                
-- stdout --
	* [functional-787563] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19195
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 19:44:47.981338   23130 out.go:291] Setting OutFile to fd 1 ...
	I0708 19:44:47.981461   23130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:44:47.981472   23130 out.go:304] Setting ErrFile to fd 2...
	I0708 19:44:47.981479   23130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:44:47.981773   23130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 19:44:47.982434   23130 out.go:298] Setting JSON to false
	I0708 19:44:47.983643   23130 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1637,"bootTime":1720466251,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 19:44:47.983733   23130 start.go:139] virtualization: kvm guest
	I0708 19:44:47.985983   23130 out.go:177] * [functional-787563] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 19:44:47.987435   23130 notify.go:220] Checking for updates...
	I0708 19:44:47.987479   23130 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 19:44:47.988890   23130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 19:44:47.990264   23130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:44:47.991698   23130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:44:47.993281   23130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 19:44:47.994732   23130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 19:44:47.996636   23130 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:44:47.997066   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:44:47.997139   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:44:48.012523   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44519
	I0708 19:44:48.013049   23130 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:44:48.013622   23130 main.go:141] libmachine: Using API Version  1
	I0708 19:44:48.013642   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:44:48.014032   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:44:48.014234   23130 main.go:141] libmachine: (functional-787563) Calling .DriverName
	I0708 19:44:48.014517   23130 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 19:44:48.014848   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:44:48.014882   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:44:48.030706   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44293
	I0708 19:44:48.031214   23130 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:44:48.031822   23130 main.go:141] libmachine: Using API Version  1
	I0708 19:44:48.031848   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:44:48.032166   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:44:48.032384   23130 main.go:141] libmachine: (functional-787563) Calling .DriverName
	I0708 19:44:48.068905   23130 out.go:177] * Using the kvm2 driver based on existing profile
	I0708 19:44:48.070213   23130 start.go:297] selected driver: kvm2
	I0708 19:44:48.070228   23130 start.go:901] validating driver "kvm2" against &{Name:functional-787563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-787563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:44:48.070338   23130 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 19:44:48.072539   23130 out.go:177] 
	W0708 19:44:48.073883   23130 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0708 19:44:48.075129   23130 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-787563 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
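
DryRun shows that `--dry-run` still performs driver and resource validation: 250MB is below minikube's 1800MB floor, so the command exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the cluster. A small sketch of the same check, with the flags taken from the log and the binary assumed to be on PATH as `minikube`:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// A memory request far below the usable minimum should be rejected even
	// with --dry-run; the log above records exit status 23 for this case.
	cmd := exec.Command("minikube", "start", "-p", "functional-787563",
		"--dry-run", "--memory", "250MB",
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("dry-run rejected the request with exit %d:\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal("expected the 250MB dry-run to fail validation")
}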

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-787563 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-787563 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.755874ms)

                                                
                                                
-- stdout --
	* [functional-787563] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19195
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 19:44:44.167058   22632 out.go:291] Setting OutFile to fd 1 ...
	I0708 19:44:44.167165   22632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:44:44.167171   22632 out.go:304] Setting ErrFile to fd 2...
	I0708 19:44:44.167175   22632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 19:44:44.167427   22632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 19:44:44.167996   22632 out.go:298] Setting JSON to false
	I0708 19:44:44.168916   22632 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1633,"bootTime":1720466251,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 19:44:44.168974   22632 start.go:139] virtualization: kvm guest
	I0708 19:44:44.171179   22632 out.go:177] * [functional-787563] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0708 19:44:44.172545   22632 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 19:44:44.172580   22632 notify.go:220] Checking for updates...
	I0708 19:44:44.174910   22632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 19:44:44.176267   22632 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 19:44:44.177474   22632 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 19:44:44.178821   22632 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 19:44:44.180140   22632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 19:44:44.181643   22632 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 19:44:44.182054   22632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:44:44.182102   22632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:44:44.196770   22632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44657
	I0708 19:44:44.197131   22632 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:44:44.197782   22632 main.go:141] libmachine: Using API Version  1
	I0708 19:44:44.197801   22632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:44:44.198109   22632 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:44:44.198314   22632 main.go:141] libmachine: (functional-787563) Calling .DriverName
	I0708 19:44:44.198599   22632 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 19:44:44.198929   22632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 19:44:44.198968   22632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 19:44:44.215996   22632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45713
	I0708 19:44:44.216376   22632 main.go:141] libmachine: () Calling .GetVersion
	I0708 19:44:44.216905   22632 main.go:141] libmachine: Using API Version  1
	I0708 19:44:44.216927   22632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 19:44:44.217270   22632 main.go:141] libmachine: () Calling .GetMachineName
	I0708 19:44:44.217479   22632 main.go:141] libmachine: (functional-787563) Calling .DriverName
	I0708 19:44:44.254551   22632 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0708 19:44:44.256034   22632 start.go:297] selected driver: kvm2
	I0708 19:44:44.256046   22632 start.go:901] validating driver "kvm2" against &{Name:functional-787563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-787563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 19:44:44.256162   22632 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 19:44:44.258343   22632 out.go:177] 
	W0708 19:44:44.259692   22632 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0708 19:44:44.261004   22632 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
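
The second status invocation above passes a Go template over minikube's status struct via `-f`; the keys on the left of each colon (including the test's "kublet" spelling) are just output labels, while the `{{.Field}}` names are the struct fields. A sketch that prints the same fields, assuming the binary and profile from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Template labels are arbitrary; the {{.Field}} names come from the
	// status struct used by the command in the log above.
	format := "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := exec.Command("minikube", "-p", "functional-787563",
		"status", "-f", format).CombinedOutput()
	if err != nil {
		log.Fatalf("status failed: %v\n%s", err, out)
	}
	fmt.Printf("%s\n", out)
}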

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-787563 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-787563 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-vp8fl" [fe69e989-9c23-422b-9945-e0ce4e0bc5cf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-vp8fl" [fe69e989-9c23-422b-9945-e0ce4e0bc5cf] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.032176518s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.54:31092
functional_test.go:1671: http://192.168.39.54:31092: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-vp8fl

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.54:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.54:31092
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.62s)
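
ServiceCmdConnect is the end-to-end NodePort path: create a deployment from the echoserver image, expose it as a NodePort service, ask minikube for the URL, and issue a GET against it. A sketch of the same flow, assuming `kubectl` and `minikube` on PATH and the context/profile name from the log; the `run` helper and the crude readiness wait are illustrative (the test itself polls a label selector for up to 10 minutes).

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	ctx := "functional-787563" // kubectl context / minikube profile from the log

	run("kubectl", "--context", ctx, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "--context", ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")
	run("kubectl", "--context", ctx, "wait", "--for=condition=Available",
		"deployment/hello-node-connect", "--timeout=120s")

	// Ask minikube for the NodePort URL and hit it, like the curl-style check above.
	url := run("minikube", "-p", ctx, "service", "hello-node-connect", "--url")
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s\n", url, resp.Status, body)
}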

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (34.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2fc86579-517b-4253-a3bb-7b8b65b1c67c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004996561s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-787563 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-787563 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-787563 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-787563 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c73fd792-2ef8-4902-ad52-83176ded9e43] Pending
helpers_test.go:344: "sp-pod" [c73fd792-2ef8-4902-ad52-83176ded9e43] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c73fd792-2ef8-4902-ad52-83176ded9e43] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.021664124s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-787563 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-787563 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-787563 delete -f testdata/storage-provisioner/pod.yaml: (1.395520649s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-787563 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [68b519c2-8ffe-49fe-b672-d7f3da891367] Pending
helpers_test.go:344: "sp-pod" [68b519c2-8ffe-49fe-b672-d7f3da891367] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [68b519c2-8ffe-49fe-b672-d7f3da891367] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004589452s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-787563 exec sp-pod -- ls /tmp/mount
E0708 19:46:29.732903   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:46:57.422217   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:51:29.733644   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.47s)
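
The point of the PVC test is durability across pod restarts: write a file into the mounted volume, delete the pod, recreate it from the same manifest, and the file is still there. A sketch of that check, assuming the `pvc.yaml`/`pod.yaml` manifests from minikube's testdata/storage-provisioner/ directory (the paths used in the log) and the context name from this run; the `kubectl` helper and the explicit waits are illustrative.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func kubectl(args ...string) []byte {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-787563"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	// Claim storage and start a pod that mounts it at /tmp/mount.
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")

	// Write a marker file into the volume, then delete the pod.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")

	// Recreate the pod from the same manifest: the claim is re-attached and
	// the marker written before the delete should still be listed.
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")
	fmt.Printf("volume contents after recreate: %s",
		kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}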

                                                
                                    
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh -n functional-787563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 cp functional-787563:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2976022198/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh -n functional-787563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh -n functional-787563 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.47s)
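
`minikube cp` copies files between the host and the node over SSH, in both directions, which is what the cp/`ssh cat` pairs above verify. A short round-trip sketch, assuming the binary and profile from the log; the local file names and the `mk` helper are hypothetical stand-ins for the test's testdata/cp-test.txt.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func mk(args ...string) []byte {
	out, err := exec.Command("minikube",
		append([]string{"-p", "functional-787563"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	// A throwaway file to copy.
	if err := os.WriteFile("cp-test.txt", []byte("hello from the host\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Host -> node, then read it back through ssh.
	mk("cp", "cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Printf("inside the node: %s", mk("ssh", "sudo cat /home/docker/cp-test.txt"))

	// Node -> host, using the <profile>:<path> form seen in the log.
	mk("cp", "functional-787563:/home/docker/cp-test.txt", "cp-test-roundtrip.txt")
	back, _ := os.ReadFile("cp-test-roundtrip.txt")
	fmt.Printf("back on the host: %s", back)
}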

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13141/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "sudo cat /etc/test/nested/copy/13141/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13141.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "sudo cat /etc/ssl/certs/13141.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13141.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "sudo cat /usr/share/ca-certificates/13141.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/131412.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "sudo cat /etc/ssl/certs/131412.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/131412.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "sudo cat /usr/share/ca-certificates/131412.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.49s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-787563 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787563 ssh "sudo systemctl is-active docker": exit status 1 (243.922156ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787563 ssh "sudo systemctl is-active containerd": exit status 1 (240.490749ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
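
With crio selected as the container runtime, the check above is simply that the other runtimes' systemd units are inactive: `systemctl is-active` prints "inactive" and exits 3, which surfaces through `minikube ssh` as the non-zero exits recorded in the log. A sketch of the same probe, assuming the binary and profile from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// `systemctl is-active` exits non-zero for inactive units, so a
		// non-nil err together with "inactive" output is the expected case.
		out, err := exec.Command("minikube", "-p", "functional-787563",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		if err != nil && state == "inactive" {
			fmt.Printf("%s: inactive (as expected with the crio runtime)\n", unit)
			continue
		}
		fmt.Printf("%s: unexpected state %q (err=%v)\n", unit, state, err)
	}
}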

                                                
                                    
TestFunctional/parallel/License (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-787563 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-787563
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-787563
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240513-cd2ac642
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-787563 image ls --format short --alsologtostderr:
I0708 19:44:49.342813   23293 out.go:291] Setting OutFile to fd 1 ...
I0708 19:44:49.343088   23293 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 19:44:49.343098   23293 out.go:304] Setting ErrFile to fd 2...
I0708 19:44:49.343102   23293 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 19:44:49.343321   23293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
I0708 19:44:49.343954   23293 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0708 19:44:49.344051   23293 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0708 19:44:49.344413   23293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0708 19:44:49.344457   23293 main.go:141] libmachine: Launching plugin server for driver kvm2
I0708 19:44:49.359390   23293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45291
I0708 19:44:49.359901   23293 main.go:141] libmachine: () Calling .GetVersion
I0708 19:44:49.360491   23293 main.go:141] libmachine: Using API Version  1
I0708 19:44:49.360519   23293 main.go:141] libmachine: () Calling .SetConfigRaw
I0708 19:44:49.360901   23293 main.go:141] libmachine: () Calling .GetMachineName
I0708 19:44:49.361106   23293 main.go:141] libmachine: (functional-787563) Calling .GetState
I0708 19:44:49.362854   23293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0708 19:44:49.362888   23293 main.go:141] libmachine: Launching plugin server for driver kvm2
I0708 19:44:49.377753   23293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
I0708 19:44:49.378137   23293 main.go:141] libmachine: () Calling .GetVersion
I0708 19:44:49.378582   23293 main.go:141] libmachine: Using API Version  1
I0708 19:44:49.378603   23293 main.go:141] libmachine: () Calling .SetConfigRaw
I0708 19:44:49.378907   23293 main.go:141] libmachine: () Calling .GetMachineName
I0708 19:44:49.379106   23293 main.go:141] libmachine: (functional-787563) Calling .DriverName
I0708 19:44:49.379281   23293 ssh_runner.go:195] Run: systemctl --version
I0708 19:44:49.379302   23293 main.go:141] libmachine: (functional-787563) Calling .GetSSHHostname
I0708 19:44:49.381879   23293 main.go:141] libmachine: (functional-787563) DBG | domain functional-787563 has defined MAC address 52:54:00:76:c0:f8 in network mk-functional-787563
I0708 19:44:49.382302   23293 main.go:141] libmachine: (functional-787563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:c0:f8", ip: ""} in network mk-functional-787563: {Iface:virbr1 ExpiryTime:2024-07-08 20:41:29 +0000 UTC Type:0 Mac:52:54:00:76:c0:f8 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-787563 Clientid:01:52:54:00:76:c0:f8}
I0708 19:44:49.382337   23293 main.go:141] libmachine: (functional-787563) DBG | domain functional-787563 has defined IP address 192.168.39.54 and MAC address 52:54:00:76:c0:f8 in network mk-functional-787563
I0708 19:44:49.382446   23293 main.go:141] libmachine: (functional-787563) Calling .GetSSHPort
I0708 19:44:49.382614   23293 main.go:141] libmachine: (functional-787563) Calling .GetSSHKeyPath
I0708 19:44:49.382757   23293 main.go:141] libmachine: (functional-787563) Calling .GetSSHUsername
I0708 19:44:49.382899   23293 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/functional-787563/id_rsa Username:docker}
I0708 19:44:49.483181   23293 ssh_runner.go:195] Run: sudo crictl images --output json
I0708 19:44:49.549876   23293 main.go:141] libmachine: Making call to close driver server
I0708 19:44:49.549889   23293 main.go:141] libmachine: (functional-787563) Calling .Close
I0708 19:44:49.550151   23293 main.go:141] libmachine: Successfully made call to close driver server
I0708 19:44:49.550171   23293 main.go:141] libmachine: Making call to close connection to plugin binary
I0708 19:44:49.550180   23293 main.go:141] libmachine: Making call to close driver server
I0708 19:44:49.550189   23293 main.go:141] libmachine: (functional-787563) Calling .Close
I0708 19:44:49.550415   23293 main.go:141] libmachine: Successfully made call to close driver server
I0708 19:44:49.550440   23293 main.go:141] libmachine: (functional-787563) DBG | Closing plugin on server side
I0708 19:44:49.550457   23293 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-787563 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/google-containers/addon-resizer  | functional-787563  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                 | latest             | fffffc90d343c | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-787563  | 1a9b40a9c3643 | 3.33kB |
| localhost/my-image                      | functional-787563  | 1316b718ec78b | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | ac1c61439df46 | 65.9MB |
| registry.k8s.io/kube-scheduler          | v1.30.2            | 7820c83aa1394 | 63.1MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-apiserver          | v1.30.2            | 56ce0fd9fb532 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.2            | 53c535741fb44 | 86MB   |
| registry.k8s.io/kube-controller-manager | v1.30.2            | e874818b3caac | 112MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-787563 image ls --format table --alsologtostderr:
I0708 19:44:54.477890   23458 out.go:291] Setting OutFile to fd 1 ...
I0708 19:44:54.478173   23458 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 19:44:54.478186   23458 out.go:304] Setting ErrFile to fd 2...
I0708 19:44:54.478190   23458 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 19:44:54.478380   23458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
I0708 19:44:54.478957   23458 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0708 19:44:54.479055   23458 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0708 19:44:54.479408   23458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0708 19:44:54.479487   23458 main.go:141] libmachine: Launching plugin server for driver kvm2
I0708 19:44:54.494250   23458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
I0708 19:44:54.494682   23458 main.go:141] libmachine: () Calling .GetVersion
I0708 19:44:54.495434   23458 main.go:141] libmachine: Using API Version  1
I0708 19:44:54.495475   23458 main.go:141] libmachine: () Calling .SetConfigRaw
I0708 19:44:54.495812   23458 main.go:141] libmachine: () Calling .GetMachineName
I0708 19:44:54.495989   23458 main.go:141] libmachine: (functional-787563) Calling .GetState
I0708 19:44:54.498038   23458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0708 19:44:54.498081   23458 main.go:141] libmachine: Launching plugin server for driver kvm2
I0708 19:44:54.513572   23458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45881
I0708 19:44:54.514053   23458 main.go:141] libmachine: () Calling .GetVersion
I0708 19:44:54.514586   23458 main.go:141] libmachine: Using API Version  1
I0708 19:44:54.514618   23458 main.go:141] libmachine: () Calling .SetConfigRaw
I0708 19:44:54.515018   23458 main.go:141] libmachine: () Calling .GetMachineName
I0708 19:44:54.515271   23458 main.go:141] libmachine: (functional-787563) Calling .DriverName
I0708 19:44:54.515518   23458 ssh_runner.go:195] Run: systemctl --version
I0708 19:44:54.515542   23458 main.go:141] libmachine: (functional-787563) Calling .GetSSHHostname
I0708 19:44:54.518189   23458 main.go:141] libmachine: (functional-787563) DBG | domain functional-787563 has defined MAC address 52:54:00:76:c0:f8 in network mk-functional-787563
I0708 19:44:54.518503   23458 main.go:141] libmachine: (functional-787563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:c0:f8", ip: ""} in network mk-functional-787563: {Iface:virbr1 ExpiryTime:2024-07-08 20:41:29 +0000 UTC Type:0 Mac:52:54:00:76:c0:f8 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-787563 Clientid:01:52:54:00:76:c0:f8}
I0708 19:44:54.518535   23458 main.go:141] libmachine: (functional-787563) DBG | domain functional-787563 has defined IP address 192.168.39.54 and MAC address 52:54:00:76:c0:f8 in network mk-functional-787563
I0708 19:44:54.518702   23458 main.go:141] libmachine: (functional-787563) Calling .GetSSHPort
I0708 19:44:54.518874   23458 main.go:141] libmachine: (functional-787563) Calling .GetSSHKeyPath
I0708 19:44:54.519105   23458 main.go:141] libmachine: (functional-787563) Calling .GetSSHUsername
I0708 19:44:54.519241   23458 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/functional-787563/id_rsa Username:docker}
I0708 19:44:54.602418   23458 ssh_runner.go:195] Run: sudo crictl images --output json
I0708 19:44:54.646982   23458 main.go:141] libmachine: Making call to close driver server
I0708 19:44:54.646999   23458 main.go:141] libmachine: (functional-787563) Calling .Close
I0708 19:44:54.647324   23458 main.go:141] libmachine: (functional-787563) DBG | Closing plugin on server side
I0708 19:44:54.647329   23458 main.go:141] libmachine: Successfully made call to close driver server
I0708 19:44:54.647347   23458 main.go:141] libmachine: Making call to close connection to plugin binary
I0708 19:44:54.647358   23458 main.go:141] libmachine: Making call to close driver server
I0708 19:44:54.647364   23458 main.go:141] libmachine: (functional-787563) Calling .Close
I0708 19:44:54.647569   23458 main.go:141] libmachine: Successfully made call to close driver server
I0708 19:44:54.647583   23458 main.go:141] libmachine: Making call to close connection to plugin binary
I0708 19:44:54.647599   23458 main.go:141] libmachine: (functional-787563) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-787563 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"1a9b40a9c3643c0085601e85e8ca7444f582278ac54a703644858a5f1f2959e6","repoDigests":["localhost/minikube-local-cache-test@sha256:47b8a7d19bb81b562f3bbacf4e2c0370f3aacdc02afa7fd514a85810de390832"],"repoTags":["localhost/minikube-local-cache-test:functional-787563"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"35
0b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":
["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"1316b718ec78b81d5be688ad182fa56ea718ac7945c6cf1406005c99a80df61a","repoDigests":["localhost/my-image@sha256:905a1571ae2bbf93198efa89686e14df1f3cc47a970326db98029928a26dea97"],"repoTags":["localhost/my-image:functional-787563"],"size":"1468600"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":["registry.k8s.io/kube-scheduler@s
ha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc","registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"63051080"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f","repoDigests":["docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a
8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"65908273"},{"id":"efd3555ff1ca22f6e47712bc0e79df2c8a6d8fe7fe872c6bff4ed64608cd80fa","repoDigests":["docker.io/library/c81330c2458918be26e095199d8be8c8805a21daa610b268d13323f3fb04e16d-tmp@sha256:e4ab9ded609da94939005a392c03ab2339daf89071de2b3edfdcbb471c07805d"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846
fde2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117609954"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df","docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244"],"repoTags":["docker.io/library/nginx:latest"],"size":"191746190"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:
functional-787563"],"size":"34114467"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e","registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"112194888"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":["registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy
:v1.30.2"],"size":"85953433"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-787563 image ls --format json --alsologtostderr:
I0708 19:44:54.253949   23434 out.go:291] Setting OutFile to fd 1 ...
I0708 19:44:54.254185   23434 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 19:44:54.254193   23434 out.go:304] Setting ErrFile to fd 2...
I0708 19:44:54.254197   23434 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 19:44:54.254371   23434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
I0708 19:44:54.254927   23434 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0708 19:44:54.255019   23434 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0708 19:44:54.255367   23434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0708 19:44:54.255408   23434 main.go:141] libmachine: Launching plugin server for driver kvm2
I0708 19:44:54.270934   23434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42075
I0708 19:44:54.271474   23434 main.go:141] libmachine: () Calling .GetVersion
I0708 19:44:54.272016   23434 main.go:141] libmachine: Using API Version  1
I0708 19:44:54.272036   23434 main.go:141] libmachine: () Calling .SetConfigRaw
I0708 19:44:54.272400   23434 main.go:141] libmachine: () Calling .GetMachineName
I0708 19:44:54.272680   23434 main.go:141] libmachine: (functional-787563) Calling .GetState
I0708 19:44:54.274629   23434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0708 19:44:54.274674   23434 main.go:141] libmachine: Launching plugin server for driver kvm2
I0708 19:44:54.289328   23434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46843
I0708 19:44:54.289681   23434 main.go:141] libmachine: () Calling .GetVersion
I0708 19:44:54.290082   23434 main.go:141] libmachine: Using API Version  1
I0708 19:44:54.290099   23434 main.go:141] libmachine: () Calling .SetConfigRaw
I0708 19:44:54.290399   23434 main.go:141] libmachine: () Calling .GetMachineName
I0708 19:44:54.290573   23434 main.go:141] libmachine: (functional-787563) Calling .DriverName
I0708 19:44:54.290811   23434 ssh_runner.go:195] Run: systemctl --version
I0708 19:44:54.290839   23434 main.go:141] libmachine: (functional-787563) Calling .GetSSHHostname
I0708 19:44:54.293719   23434 main.go:141] libmachine: (functional-787563) DBG | domain functional-787563 has defined MAC address 52:54:00:76:c0:f8 in network mk-functional-787563
I0708 19:44:54.294068   23434 main.go:141] libmachine: (functional-787563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:c0:f8", ip: ""} in network mk-functional-787563: {Iface:virbr1 ExpiryTime:2024-07-08 20:41:29 +0000 UTC Type:0 Mac:52:54:00:76:c0:f8 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-787563 Clientid:01:52:54:00:76:c0:f8}
I0708 19:44:54.294109   23434 main.go:141] libmachine: (functional-787563) DBG | domain functional-787563 has defined IP address 192.168.39.54 and MAC address 52:54:00:76:c0:f8 in network mk-functional-787563
I0708 19:44:54.294247   23434 main.go:141] libmachine: (functional-787563) Calling .GetSSHPort
I0708 19:44:54.294442   23434 main.go:141] libmachine: (functional-787563) Calling .GetSSHKeyPath
I0708 19:44:54.294624   23434 main.go:141] libmachine: (functional-787563) Calling .GetSSHUsername
I0708 19:44:54.294819   23434 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/functional-787563/id_rsa Username:docker}
I0708 19:44:54.377927   23434 ssh_runner.go:195] Run: sudo crictl images --output json
I0708 19:44:54.423338   23434 main.go:141] libmachine: Making call to close driver server
I0708 19:44:54.423354   23434 main.go:141] libmachine: (functional-787563) Calling .Close
I0708 19:44:54.423650   23434 main.go:141] libmachine: Successfully made call to close driver server
I0708 19:44:54.423690   23434 main.go:141] libmachine: Making call to close connection to plugin binary
I0708 19:44:54.423695   23434 main.go:141] libmachine: (functional-787563) DBG | Closing plugin on server side
I0708 19:44:54.423699   23434 main.go:141] libmachine: Making call to close driver server
I0708 19:44:54.423708   23434 main.go:141] libmachine: (functional-787563) Calling .Close
I0708 19:44:54.423957   23434 main.go:141] libmachine: Successfully made call to close driver server
I0708 19:44:54.423981   23434 main.go:141] libmachine: (functional-787563) DBG | Closing plugin on server side
I0708 19:44:54.423985   23434 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
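
Editor's note: each element in the JSON output above carries an image ID, its repo digests, repo tags, and a size in bytes. The following is a minimal Go sketch of decoding that output; the struct is an assumption inferred from the JSON keys shown above, not a type taken from the minikube source, and the binary path and profile name are simply reused from this run.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crioImage mirrors the keys visible in the `image ls --format json` output above.
type crioImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Run the same command the test runs (paths assumed from this log).
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-787563",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []crioImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-60s %s bytes\n", firstOr(img.RepoTags, img.ID), img.Size)
	}
}

// firstOr returns the first tag if any, else the fallback (here the image ID).
func firstOr(tags []string, fallback string) string {
	if len(tags) > 0 {
		return tags[0]
	}
	return fallback
}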

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-787563 image ls --format yaml --alsologtostderr:
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
- docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244
repoTags:
- docker.io/library/nginx:latest
size: "191746190"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-787563
size: "34114467"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f
repoDigests:
- docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "65908273"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 1a9b40a9c3643c0085601e85e8ca7444f582278ac54a703644858a5f1f2959e6
repoDigests:
- localhost/minikube-local-cache-test@sha256:47b8a7d19bb81b562f3bbacf4e2c0370f3aacdc02afa7fd514a85810de390832
repoTags:
- localhost/minikube-local-cache-test:functional-787563
size: "3330"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117609954"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
- registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "63051080"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
- registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "112194888"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests:
- registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "85953433"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-787563 image ls --format yaml --alsologtostderr:
I0708 19:44:49.595427   23317 out.go:291] Setting OutFile to fd 1 ...
I0708 19:44:49.595686   23317 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 19:44:49.595695   23317 out.go:304] Setting ErrFile to fd 2...
I0708 19:44:49.595698   23317 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 19:44:49.595939   23317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
I0708 19:44:49.596526   23317 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0708 19:44:49.596634   23317 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0708 19:44:49.596996   23317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0708 19:44:49.597047   23317 main.go:141] libmachine: Launching plugin server for driver kvm2
I0708 19:44:49.611802   23317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45393
I0708 19:44:49.612313   23317 main.go:141] libmachine: () Calling .GetVersion
I0708 19:44:49.612867   23317 main.go:141] libmachine: Using API Version  1
I0708 19:44:49.612892   23317 main.go:141] libmachine: () Calling .SetConfigRaw
I0708 19:44:49.613243   23317 main.go:141] libmachine: () Calling .GetMachineName
I0708 19:44:49.613426   23317 main.go:141] libmachine: (functional-787563) Calling .GetState
I0708 19:44:49.615183   23317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0708 19:44:49.615216   23317 main.go:141] libmachine: Launching plugin server for driver kvm2
I0708 19:44:49.629660   23317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45815
I0708 19:44:49.630094   23317 main.go:141] libmachine: () Calling .GetVersion
I0708 19:44:49.630628   23317 main.go:141] libmachine: Using API Version  1
I0708 19:44:49.630652   23317 main.go:141] libmachine: () Calling .SetConfigRaw
I0708 19:44:49.630952   23317 main.go:141] libmachine: () Calling .GetMachineName
I0708 19:44:49.631091   23317 main.go:141] libmachine: (functional-787563) Calling .DriverName
I0708 19:44:49.631268   23317 ssh_runner.go:195] Run: systemctl --version
I0708 19:44:49.631286   23317 main.go:141] libmachine: (functional-787563) Calling .GetSSHHostname
I0708 19:44:49.634436   23317 main.go:141] libmachine: (functional-787563) DBG | domain functional-787563 has defined MAC address 52:54:00:76:c0:f8 in network mk-functional-787563
I0708 19:44:49.634851   23317 main.go:141] libmachine: (functional-787563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:c0:f8", ip: ""} in network mk-functional-787563: {Iface:virbr1 ExpiryTime:2024-07-08 20:41:29 +0000 UTC Type:0 Mac:52:54:00:76:c0:f8 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-787563 Clientid:01:52:54:00:76:c0:f8}
I0708 19:44:49.634887   23317 main.go:141] libmachine: (functional-787563) DBG | domain functional-787563 has defined IP address 192.168.39.54 and MAC address 52:54:00:76:c0:f8 in network mk-functional-787563
I0708 19:44:49.635024   23317 main.go:141] libmachine: (functional-787563) Calling .GetSSHPort
I0708 19:44:49.635177   23317 main.go:141] libmachine: (functional-787563) Calling .GetSSHKeyPath
I0708 19:44:49.635338   23317 main.go:141] libmachine: (functional-787563) Calling .GetSSHUsername
I0708 19:44:49.635497   23317 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/functional-787563/id_rsa Username:docker}
I0708 19:44:49.767041   23317 ssh_runner.go:195] Run: sudo crictl images --output json
I0708 19:44:49.837881   23317 main.go:141] libmachine: Making call to close driver server
I0708 19:44:49.837909   23317 main.go:141] libmachine: (functional-787563) Calling .Close
I0708 19:44:49.838164   23317 main.go:141] libmachine: Successfully made call to close driver server
I0708 19:44:49.838183   23317 main.go:141] libmachine: Making call to close connection to plugin binary
I0708 19:44:49.838194   23317 main.go:141] libmachine: Making call to close driver server
I0708 19:44:49.838202   23317 main.go:141] libmachine: (functional-787563) Calling .Close
I0708 19:44:49.838202   23317 main.go:141] libmachine: (functional-787563) DBG | Closing plugin on server side
I0708 19:44:49.838426   23317 main.go:141] libmachine: Successfully made call to close driver server
I0708 19:44:49.838440   23317 main.go:141] libmachine: (functional-787563) DBG | Closing plugin on server side
I0708 19:44:49.838461   23317 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787563 ssh pgrep buildkitd: exit status 1 (234.309482ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image build -t localhost/my-image:functional-787563 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-787563 image build -t localhost/my-image:functional-787563 testdata/build --alsologtostderr: (3.912285098s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-787563 image build -t localhost/my-image:functional-787563 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> efd3555ff1c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-787563
--> 1316b718ec7
Successfully tagged localhost/my-image:functional-787563
1316b718ec78b81d5be688ad182fa56ea718ac7945c6cf1406005c99a80df61a
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-787563 image build -t localhost/my-image:functional-787563 testdata/build --alsologtostderr:
I0708 19:44:50.131041   23370 out.go:291] Setting OutFile to fd 1 ...
I0708 19:44:50.131387   23370 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 19:44:50.131400   23370 out.go:304] Setting ErrFile to fd 2...
I0708 19:44:50.131407   23370 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 19:44:50.131734   23370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
I0708 19:44:50.132532   23370 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0708 19:44:50.133186   23370 config.go:182] Loaded profile config "functional-787563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0708 19:44:50.133741   23370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0708 19:44:50.133828   23370 main.go:141] libmachine: Launching plugin server for driver kvm2
I0708 19:44:50.148700   23370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
I0708 19:44:50.149198   23370 main.go:141] libmachine: () Calling .GetVersion
I0708 19:44:50.149736   23370 main.go:141] libmachine: Using API Version  1
I0708 19:44:50.149754   23370 main.go:141] libmachine: () Calling .SetConfigRaw
I0708 19:44:50.150142   23370 main.go:141] libmachine: () Calling .GetMachineName
I0708 19:44:50.150376   23370 main.go:141] libmachine: (functional-787563) Calling .GetState
I0708 19:44:50.152412   23370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0708 19:44:50.152461   23370 main.go:141] libmachine: Launching plugin server for driver kvm2
I0708 19:44:50.167346   23370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38803
I0708 19:44:50.167880   23370 main.go:141] libmachine: () Calling .GetVersion
I0708 19:44:50.168500   23370 main.go:141] libmachine: Using API Version  1
I0708 19:44:50.168537   23370 main.go:141] libmachine: () Calling .SetConfigRaw
I0708 19:44:50.168849   23370 main.go:141] libmachine: () Calling .GetMachineName
I0708 19:44:50.169047   23370 main.go:141] libmachine: (functional-787563) Calling .DriverName
I0708 19:44:50.169239   23370 ssh_runner.go:195] Run: systemctl --version
I0708 19:44:50.169273   23370 main.go:141] libmachine: (functional-787563) Calling .GetSSHHostname
I0708 19:44:50.172391   23370 main.go:141] libmachine: (functional-787563) DBG | domain functional-787563 has defined MAC address 52:54:00:76:c0:f8 in network mk-functional-787563
I0708 19:44:50.172747   23370 main.go:141] libmachine: (functional-787563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:c0:f8", ip: ""} in network mk-functional-787563: {Iface:virbr1 ExpiryTime:2024-07-08 20:41:29 +0000 UTC Type:0 Mac:52:54:00:76:c0:f8 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-787563 Clientid:01:52:54:00:76:c0:f8}
I0708 19:44:50.172774   23370 main.go:141] libmachine: (functional-787563) DBG | domain functional-787563 has defined IP address 192.168.39.54 and MAC address 52:54:00:76:c0:f8 in network mk-functional-787563
I0708 19:44:50.172929   23370 main.go:141] libmachine: (functional-787563) Calling .GetSSHPort
I0708 19:44:50.173088   23370 main.go:141] libmachine: (functional-787563) Calling .GetSSHKeyPath
I0708 19:44:50.173236   23370 main.go:141] libmachine: (functional-787563) Calling .GetSSHUsername
I0708 19:44:50.173392   23370 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/functional-787563/id_rsa Username:docker}
I0708 19:44:50.311035   23370 build_images.go:161] Building image from path: /tmp/build.745505907.tar
I0708 19:44:50.311112   23370 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0708 19:44:50.351004   23370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.745505907.tar
I0708 19:44:50.378850   23370 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.745505907.tar: stat -c "%s %y" /var/lib/minikube/build/build.745505907.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.745505907.tar': No such file or directory
I0708 19:44:50.378876   23370 ssh_runner.go:362] scp /tmp/build.745505907.tar --> /var/lib/minikube/build/build.745505907.tar (3072 bytes)
I0708 19:44:50.483761   23370 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.745505907
I0708 19:44:50.521813   23370 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.745505907 -xf /var/lib/minikube/build/build.745505907.tar
I0708 19:44:50.541669   23370 crio.go:315] Building image: /var/lib/minikube/build/build.745505907
I0708 19:44:50.541746   23370 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-787563 /var/lib/minikube/build/build.745505907 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0708 19:44:53.960709   23370 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-787563 /var/lib/minikube/build/build.745505907 --cgroup-manager=cgroupfs: (3.418939124s)
I0708 19:44:53.960786   23370 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.745505907
I0708 19:44:53.972240   23370 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.745505907.tar
I0708 19:44:53.983680   23370 build_images.go:217] Built localhost/my-image:functional-787563 from /tmp/build.745505907.tar
I0708 19:44:53.983720   23370 build_images.go:133] succeeded building to: functional-787563
I0708 19:44:53.983726   23370 build_images.go:134] failed building to: 
I0708 19:44:53.983749   23370 main.go:141] libmachine: Making call to close driver server
I0708 19:44:53.983761   23370 main.go:141] libmachine: (functional-787563) Calling .Close
I0708 19:44:53.984066   23370 main.go:141] libmachine: Successfully made call to close driver server
I0708 19:44:53.984086   23370 main.go:141] libmachine: Making call to close connection to plugin binary
I0708 19:44:53.984093   23370 main.go:141] libmachine: Making call to close driver server
I0708 19:44:53.984125   23370 main.go:141] libmachine: (functional-787563) Calling .Close
I0708 19:44:53.984149   23370 main.go:141] libmachine: (functional-787563) DBG | Closing plugin on server side
I0708 19:44:53.984354   23370 main.go:141] libmachine: Successfully made call to close driver server
I0708 19:44:53.984366   23370 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image ls
2024/07/08 19:44:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.37s)
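
Editor's note: the STEP lines above imply a three-line Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), which the test ships as a tar, extracts under /var/lib/minikube/build, and builds with podman inside the VM. The sketch below reproduces an equivalent build with a local podman instead of over SSH; the content.txt payload and temp-dir layout are assumptions, not the test's actual testdata.

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Recreate a build context matching the STEP lines in the log above.
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	// Same podman invocation the test runs inside the VM, here run locally.
	cmd := exec.Command("sudo", "podman", "build",
		"-t", "localhost/my-image:functional-787563",
		dir, "--cgroup-manager=cgroupfs")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}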

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-787563
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-787563 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-787563 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-gzmzp" [586d22c6-94ee-4bce-bdaf-15cd4bd2a888] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-gzmzp" [586d22c6-94ee-4bce-bdaf-15cd4bd2a888] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.00493392s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)
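
Editor's note: the subtest above creates a hello-node deployment from registry.k8s.io/echoserver:1.8 and exposes it as a NodePort service on port 8080, then waits for the pod to become healthy. A minimal Go sketch of the same flow via kubectl follows; the final `kubectl wait` is an equivalent stand-in for the test's own polling helpers, not what the test actually runs.

package main

import (
	"os"
	"os/exec"
)

// run executes a command and streams its output, stopping on the first error.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	ctx := "functional-787563" // kube context used by the test above

	// Same two kubectl calls the test makes.
	run("kubectl", "--context", ctx, "create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "--context", ctx, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")

	// Stand-in for the test's readiness polling.
	run("kubectl", "--context", ctx, "wait", "--for=condition=ready",
		"pod", "-l", "app=hello-node", "--timeout=10m")
}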

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image load --daemon gcr.io/google-containers/addon-resizer:functional-787563 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-787563 image load --daemon gcr.io/google-containers/addon-resizer:functional-787563 --alsologtostderr: (4.59878917s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.82s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image load --daemon gcr.io/google-containers/addon-resizer:functional-787563 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-787563 image load --daemon gcr.io/google-containers/addon-resizer:functional-787563 --alsologtostderr: (2.404273571s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.64s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-787563
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image load --daemon gcr.io/google-containers/addon-resizer:functional-787563 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-787563 image load --daemon gcr.io/google-containers/addon-resizer:functional-787563 --alsologtostderr: (6.885385452s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.04s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 service list -o json
functional_test.go:1490: Took "313.727958ms" to run "out/minikube-linux-amd64 -p functional-787563 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.54:30349
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.54:30349
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
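
Editor's note: the HTTPS, Format, and URL subtests all resolve the hello-node NodePort to the same endpoint (http://192.168.39.54:30349 for this run). A minimal Go sketch of exercising that endpoint follows; the URL is specific to this cluster and service and will differ on any other run.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// NodePort endpoint reported in the log above; changes per cluster/service.
	url := "http://192.168.39.54:30349"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// echoserver:1.8 echoes request details back, so this prints the
	// status line plus the echoed request.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}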

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "266.744247ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "53.572187ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "333.911483ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "42.47075ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-787563 /tmp/TestFunctionalparallelMountCmdany-port1005839477/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1720467878393052804" to /tmp/TestFunctionalparallelMountCmdany-port1005839477/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1720467878393052804" to /tmp/TestFunctionalparallelMountCmdany-port1005839477/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1720467878393052804" to /tmp/TestFunctionalparallelMountCmdany-port1005839477/001/test-1720467878393052804
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787563 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (209.909132ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  8 19:44 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  8 19:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  8 19:44 test-1720467878393052804
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh cat /mount-9p/test-1720467878393052804
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-787563 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [635fb99f-c908-4897-9eca-db45b921fa95] Pending
helpers_test.go:344: "busybox-mount" [635fb99f-c908-4897-9eca-db45b921fa95] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [635fb99f-c908-4897-9eca-db45b921fa95] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [635fb99f-c908-4897-9eca-db45b921fa95] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004444035s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-787563 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-787563 /tmp/TestFunctionalparallelMountCmdany-port1005839477/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.36s)
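
Editor's note: the mount test above starts a 9p mount daemon on the host, retries findmnt inside the guest until the mount appears (the first attempt fails, the second passes), verifies the written files, runs the busybox-mount pod, and finally unmounts. A minimal Go sketch of just the readiness polling step follows; the binary path, profile name, and 30-second deadline are assumptions reused from this log, not the test's own helpers.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll until the 9p mount is visible inside the guest, mirroring the
	// retry visible above.
	deadline := time.Now().Add(30 * time.Second)
	for {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-787563",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount is up:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			panic(fmt.Sprintf("mount never appeared: %v\n%s", err, out))
		}
		time.Sleep(time.Second)
	}
}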

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image save gcr.io/google-containers/addon-resizer:functional-787563 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image rm gcr.io/google-containers/addon-resizer:functional-787563 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-787563 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.281826487s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-787563
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 image save --daemon gcr.io/google-containers/addon-resizer:functional-787563 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-787563 image save --daemon gcr.io/google-containers/addon-resizer:functional-787563 --alsologtostderr: (1.060726651s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-787563
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.10s)
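
Taken together, the four ImageCommands cases walk one image through save-to-file, remove, load-from-file and save-to-daemon. A sketch of the same round trip outside the harness, reusing the tag from this run; the tar path below is an assumption:

package main

import (
	"log"
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%v failed: %v", args, err)
	}
}

func main() {
	const (
		mk      = "out/minikube-linux-amd64"
		profile = "functional-787563"
		img     = "gcr.io/google-containers/addon-resizer:functional-787563"
		tar     = "/tmp/addon-resizer-save.tar" // assumed scratch path
	)
	run(mk, "-p", profile, "image", "save", img, tar)        // ImageSaveToFile
	run(mk, "-p", profile, "image", "rm", img)               // ImageRemove
	run(mk, "-p", profile, "image", "load", tar)             // ImageLoadFromFile
	run(mk, "-p", profile, "image", "save", "--daemon", img) // ImageSaveDaemon
	run(mk, "-p", profile, "image", "ls")                    // confirm the tag is listed again
}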

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-787563 /tmp/TestFunctionalparallelMountCmdspecific-port3497785327/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787563 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (207.90642ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-787563 /tmp/TestFunctionalparallelMountCmdspecific-port3497785327/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787563 ssh "sudo umount -f /mount-9p": exit status 1 (213.957388ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-787563 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-787563 /tmp/TestFunctionalparallelMountCmdspecific-port3497785327/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-787563 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3529591915/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-787563 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3529591915/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-787563 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3529591915/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-787563 ssh "findmnt -T" /mount1: exit status 1 (234.760217ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-787563 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-787563 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-787563 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3529591915/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-787563 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3529591915/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-787563 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3529591915/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)
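
VerifyCleanup starts three mount daemons and then relies on a single "minikube mount --kill=true" to reap all of them. A sketch of that pattern; the host directory /tmp/shared is an assumption and must already exist:

package main

import (
	"log"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-amd64"
	const profile = "functional-787563"

	// Start three background mounts, mirroring the three daemon: lines above.
	var mounts []*exec.Cmd
	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
		c := exec.Command(mk, "mount", "-p", profile, "/tmp/shared:"+target, "--alsologtostderr", "-v=1")
		if err := c.Start(); err != nil {
			log.Fatal(err)
		}
		mounts = append(mounts, c)
	}

	// A single kill switch tears down every mount process for the profile.
	if out, err := exec.Command(mk, "mount", "-p", profile, "--kill=true").CombinedOutput(); err != nil {
		log.Fatalf("kill failed: %v\n%s", err, out)
	}
	for _, c := range mounts {
		c.Wait() // reap the now-terminated children
	}
}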

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-787563
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-787563
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-787563
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (196.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-511021 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0708 19:56:29.733900   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 19:57:52.782870   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-511021 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m16.214813415s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (196.90s)
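
StartCluster brings up a cluster with three control planes in a single invocation via --ha. A sketch of the same start-then-status sequence, using the exact flags from this run:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-amd64"
	const profile = "ha-511021"

	// Same invocation as ha_test.go:101 - three control planes, KVM, cri-o.
	start := exec.Command(mk, "start", "-p", profile, "--wait=true", "--memory=2200",
		"--ha", "-v=7", "--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatalf("start failed: %v", err)
	}

	// And the follow-up health check from ha_test.go:107.
	status := exec.Command(mk, "-p", profile, "status", "-v=7", "--alsologtostderr")
	status.Stdout, status.Stderr = os.Stdout, os.Stderr
	if err := status.Run(); err != nil {
		log.Fatalf("status reported a problem: %v", err)
	}
}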

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-511021 -- rollout status deployment/busybox: (2.706259268s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-5xjfx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-w8l78 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-x9p75 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-5xjfx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-w8l78 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-x9p75 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-5xjfx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-w8l78 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-x9p75 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.90s)
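
DeployApp applies the busybox DNS manifest, waits for the rollout, then resolves three names from every replica. A sketch of that verification loop, discovering the pods with the same jsonpath the test uses (the apply and rollout steps are omitted here):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const mk = "out/minikube-linux-amd64"
	const profile = "ha-511021"

	// Pod names, fetched the same way the test does (ha_test.go:163).
	out, err := exec.Command(mk, "kubectl", "-p", profile, "--",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatal(err)
	}

	// Resolve the same three names from every busybox replica (ha_test.go:171-189).
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range names {
			res, err := exec.Command(mk, "kubectl", "-p", profile, "--",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				log.Fatalf("%s could not resolve %s: %v\n%s", pod, name, err, res)
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}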

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-5xjfx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-5xjfx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-w8l78 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-w8l78 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-x9p75 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-511021 -- exec busybox-fc5497c4f-x9p75 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)
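
PingHostFromPods extracts the host address from the output of nslookup host.minikube.internal (fifth line, third field) and then pings it from inside the pod. The same two steps as a sketch for a single pod; the pod name is taken from this run and is otherwise an assumption:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const mk = "out/minikube-linux-amd64"
	const profile = "ha-511021"
	const pod = "busybox-fc5497c4f-5xjfx" // pod name from this run

	// Same pipeline as ha_test.go:207 - line 5, field 3 of the nslookup output
	// is the resolved address of host.minikube.internal.
	out, err := exec.Command(mk, "kubectl", "-p", profile, "--", "exec", pod, "--",
		"sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
	if err != nil {
		log.Fatal(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// And the reachability check from ha_test.go:218.
	ping, err := exec.Command(mk, "kubectl", "-p", profile, "--", "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+hostIP).CombinedOutput()
	if err != nil {
		log.Fatalf("ping failed: %v\n%s", err, ping)
	}
	fmt.Printf("%s", ping)
}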

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (46.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-511021 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-511021 -v=7 --alsologtostderr: (46.10456745s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.97s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-511021 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp testdata/cp-test.txt ha-511021:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3985602198/001/cp-test_ha-511021.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021:/home/docker/cp-test.txt ha-511021-m02:/home/docker/cp-test_ha-511021_ha-511021-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m02 "sudo cat /home/docker/cp-test_ha-511021_ha-511021-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021:/home/docker/cp-test.txt ha-511021-m03:/home/docker/cp-test_ha-511021_ha-511021-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m03 "sudo cat /home/docker/cp-test_ha-511021_ha-511021-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021:/home/docker/cp-test.txt ha-511021-m04:/home/docker/cp-test_ha-511021_ha-511021-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m04 "sudo cat /home/docker/cp-test_ha-511021_ha-511021-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp testdata/cp-test.txt ha-511021-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3985602198/001/cp-test_ha-511021-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021-m02:/home/docker/cp-test.txt ha-511021:/home/docker/cp-test_ha-511021-m02_ha-511021.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021 "sudo cat /home/docker/cp-test_ha-511021-m02_ha-511021.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021-m02:/home/docker/cp-test.txt ha-511021-m03:/home/docker/cp-test_ha-511021-m02_ha-511021-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m03 "sudo cat /home/docker/cp-test_ha-511021-m02_ha-511021-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021-m02:/home/docker/cp-test.txt ha-511021-m04:/home/docker/cp-test_ha-511021-m02_ha-511021-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m04 "sudo cat /home/docker/cp-test_ha-511021-m02_ha-511021-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp testdata/cp-test.txt ha-511021-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3985602198/001/cp-test_ha-511021-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt ha-511021:/home/docker/cp-test_ha-511021-m03_ha-511021.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021 "sudo cat /home/docker/cp-test_ha-511021-m03_ha-511021.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt ha-511021-m02:/home/docker/cp-test_ha-511021-m03_ha-511021-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m02 "sudo cat /home/docker/cp-test_ha-511021-m03_ha-511021-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021-m03:/home/docker/cp-test.txt ha-511021-m04:/home/docker/cp-test_ha-511021-m03_ha-511021-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m04 "sudo cat /home/docker/cp-test_ha-511021-m03_ha-511021-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp testdata/cp-test.txt ha-511021-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3985602198/001/cp-test_ha-511021-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt ha-511021:/home/docker/cp-test_ha-511021-m04_ha-511021.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021 "sudo cat /home/docker/cp-test_ha-511021-m04_ha-511021.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt ha-511021-m02:/home/docker/cp-test_ha-511021-m04_ha-511021-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m02 "sudo cat /home/docker/cp-test_ha-511021-m04_ha-511021-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 cp ha-511021-m04:/home/docker/cp-test.txt ha-511021-m03:/home/docker/cp-test_ha-511021-m04_ha-511021-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 ssh -n ha-511021-m03 "sudo cat /home/docker/cp-test_ha-511021-m04_ha-511021-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.70s)
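
CopyFile is a full matrix: the test file is pushed to every node with minikube cp and read back with minikube ssh -n <node>, both host-to-node and node-to-node. The nested loop the log above expands to, as a sketch (the node-to-host-tmpdir copies are omitted):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func mustRun(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	const mk = "out/minikube-linux-amd64"
	const profile = "ha-511021"
	nodes := []string{"ha-511021", "ha-511021-m02", "ha-511021-m03", "ha-511021-m04"}

	for _, src := range nodes {
		// Host -> node copy, then read it back over ssh (helpers_test.go:556/534 pattern).
		mustRun(mk, "-p", profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		mustRun(mk, "-p", profile, "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")

		// Node -> every other node, verified on the destination.
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			remote := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			mustRun(mk, "-p", profile, "cp", src+":/home/docker/cp-test.txt", dst+":"+remote)
			mustRun(mk, "-p", profile, "ssh", "-n", dst, "sudo cat "+remote)
		}
	}
}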

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.461173001s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-511021 node delete m03 -v=7 --alsologtostderr: (16.689681388s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.43s)
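
After deleting m03 the test confirms the surviving nodes are Ready with a kubectl go-template rather than by parsing table output. A sketch of that delete-then-verify step; the template string is the one from ha_test.go:519, minus the extra shell quoting:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-amd64"
	const profile = "ha-511021"

	del := exec.Command(mk, "-p", profile, "node", "delete", "m03", "-v=7", "--alsologtostderr")
	del.Stdout, del.Stderr = os.Stdout, os.Stderr
	if err := del.Run(); err != nil {
		log.Fatalf("node delete failed: %v", err)
	}

	// One Ready status per remaining node, one per line.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}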

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (353.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-511021 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0708 20:14:23.843655   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 20:14:32.783603   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
E0708 20:15:46.891761   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 20:16:29.733878   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-511021 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m52.630440725s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (353.39s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (70.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-511021 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-511021 --control-plane -v=7 --alsologtostderr: (1m9.661603341s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-511021 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.52s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (61.85s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-704595 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0708 20:19:23.844246   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-704595 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.852051044s)
--- PASS: TestJSONOutput/start/Command (61.85s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-704595 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-704595 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.38s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-704595 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-704595 --output=json --user=testUser: (7.380632992s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-261107 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-261107 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.574068ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4d49f7e7-2c15-4622-b07c-abee155eebb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-261107] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"91ff2f91-6ddb-4b0e-a5c1-63bf4347bcf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19195"}}
	{"specversion":"1.0","id":"ba56a4c3-d1cc-470c-aa44-7eb7eaa51f65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"161bebb2-a4ba-4a4e-a2f6-b38f2d044008","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig"}}
	{"specversion":"1.0","id":"bc048a99-07c8-42da-b7c4-dfcea28ef2ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube"}}
	{"specversion":"1.0","id":"bae86325-4bb8-497b-ab91-7aba9f1302e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1d552479-32e3-431d-878a-5459d448478c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e9324d63-aa7a-4153-b030-38aadbba5d97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-261107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-261107
--- PASS: TestErrorJSONOutput (0.19s)
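
With --output=json every line minikube emits is a CloudEvents envelope like the ones captured above (io.k8s.sigs.minikube.step, .info, .error). A small decoder sketch for such a stream; the field names are taken from the events in this log, and piping minikube into it is only one way to feed it:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event mirrors the envelope shape seen in the -- stdout -- block above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Example feed:
	//   out/minikube-linux-amd64 start -p json-output-704595 --output=json --user=testUser | go run decode.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON noise on the stream
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		default:
			fmt.Println(e.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}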

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (94.05s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-972456 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-972456 --driver=kvm2  --container-runtime=crio: (46.898916539s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-976321 --driver=kvm2  --container-runtime=crio
E0708 20:21:29.733252   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-976321 --driver=kvm2  --container-runtime=crio: (44.548086854s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-972456
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-976321
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-976321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-976321
helpers_test.go:175: Cleaning up "first-972456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-972456
--- PASS: TestMinikubeProfile (94.05s)
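
TestMinikubeProfile creates two profiles, makes one active with "minikube profile <name>", and inspects the result with "profile list -ojson". The switch-and-list step as a sketch, assuming both profiles from the test above already exist:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-amd64"

	// Make first-972456 the active profile, then dump the profile list as JSON.
	if out, err := exec.Command(mk, "profile", "first-972456").CombinedOutput(); err != nil {
		log.Fatalf("profile switch failed: %v\n%s", err, out)
	}
	out, err := exec.Command(mk, "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}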

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.59s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-885445 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-885445 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.584745144s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.59s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-885445 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-885445 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
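
MountStart exercises the other mount path: the 9p share is declared at start time via the --mount-* flags (gid, msize, port, uid) rather than through a separate minikube mount process, and then verified from inside the guest. The same start-and-verify pair as a sketch:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-amd64"
	const profile = "mount-start-1-885445"

	// Declare the 9p share up front, with the same flags as mount_start_test.go:98.
	start := exec.Command(mk, "start", "-p", profile, "--memory=2048", "--mount",
		"--mount-gid", "0", "--mount-msize", "6543", "--mount-port", "46464",
		"--mount-uid", "0", "--no-kubernetes", "--driver=kvm2", "--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatalf("start failed: %v", err)
	}

	// Then confirm the guest sees it, as mount_start_test.go:114/127 do.
	for _, check := range []string{"ls /minikube-host", "mount | grep 9p"} {
		if out, err := exec.Command(mk, "-p", profile, "ssh", check).CombinedOutput(); err != nil {
			log.Fatalf("%q failed: %v\n%s", check, err, out)
		}
	}
}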

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.1s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-904852 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-904852 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.096880122s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.10s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904852 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904852 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-885445 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904852 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904852 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-904852
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-904852: (1.272279955s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.22s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-904852
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-904852: (21.217902549s)
--- PASS: TestMountStart/serial/RestartStopped (22.22s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904852 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904852 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (95.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-957088 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0708 20:24:23.843464   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-957088 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m35.514753189s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.92s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-957088 -- rollout status deployment/busybox: (2.339787173s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- exec busybox-fc5497c4f-fqkrd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- exec busybox-fc5497c4f-q6zth -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- exec busybox-fc5497c4f-fqkrd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- exec busybox-fc5497c4f-q6zth -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- exec busybox-fc5497c4f-fqkrd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- exec busybox-fc5497c4f-q6zth -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.83s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- exec busybox-fc5497c4f-fqkrd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- exec busybox-fc5497c4f-fqkrd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- exec busybox-fc5497c4f-q6zth -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-957088 -- exec busybox-fc5497c4f-q6zth -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                    
TestMultiNode/serial/AddNode (41.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-957088 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-957088 -v 3 --alsologtostderr: (41.269379934s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.83s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-957088 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 cp testdata/cp-test.txt multinode-957088:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 cp multinode-957088:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4089420253/001/cp-test_multinode-957088.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 cp multinode-957088:/home/docker/cp-test.txt multinode-957088-m02:/home/docker/cp-test_multinode-957088_multinode-957088-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088-m02 "sudo cat /home/docker/cp-test_multinode-957088_multinode-957088-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 cp multinode-957088:/home/docker/cp-test.txt multinode-957088-m03:/home/docker/cp-test_multinode-957088_multinode-957088-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088-m03 "sudo cat /home/docker/cp-test_multinode-957088_multinode-957088-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 cp testdata/cp-test.txt multinode-957088-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 cp multinode-957088-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4089420253/001/cp-test_multinode-957088-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 cp multinode-957088-m02:/home/docker/cp-test.txt multinode-957088:/home/docker/cp-test_multinode-957088-m02_multinode-957088.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088 "sudo cat /home/docker/cp-test_multinode-957088-m02_multinode-957088.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 cp multinode-957088-m02:/home/docker/cp-test.txt multinode-957088-m03:/home/docker/cp-test_multinode-957088-m02_multinode-957088-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088-m03 "sudo cat /home/docker/cp-test_multinode-957088-m02_multinode-957088-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 cp testdata/cp-test.txt multinode-957088-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 cp multinode-957088-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4089420253/001/cp-test_multinode-957088-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 cp multinode-957088-m03:/home/docker/cp-test.txt multinode-957088:/home/docker/cp-test_multinode-957088-m03_multinode-957088.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088 "sudo cat /home/docker/cp-test_multinode-957088-m03_multinode-957088.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 cp multinode-957088-m03:/home/docker/cp-test.txt multinode-957088-m02:/home/docker/cp-test_multinode-957088-m03_multinode-957088-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 ssh -n multinode-957088-m02 "sudo cat /home/docker/cp-test_multinode-957088-m03_multinode-957088-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.27s)

TestMultiNode/serial/StopNode (2.36s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-957088 node stop m03: (1.513275394s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-957088 status: exit status 7 (422.108536ms)

-- stdout --
	multinode-957088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-957088-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-957088-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-957088 status --alsologtostderr: exit status 7 (425.465909ms)

-- stdout --
	multinode-957088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-957088-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-957088-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0708 20:25:22.746771   43027 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:25:22.747042   43027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:25:22.747053   43027 out.go:304] Setting ErrFile to fd 2...
	I0708 20:25:22.747057   43027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:25:22.747318   43027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:25:22.747556   43027 out.go:298] Setting JSON to false
	I0708 20:25:22.747589   43027 mustload.go:65] Loading cluster: multinode-957088
	I0708 20:25:22.747706   43027 notify.go:220] Checking for updates...
	I0708 20:25:22.748026   43027 config.go:182] Loaded profile config "multinode-957088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:25:22.748045   43027 status.go:255] checking status of multinode-957088 ...
	I0708 20:25:22.748479   43027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:25:22.748552   43027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:25:22.763625   43027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43791
	I0708 20:25:22.764032   43027 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:25:22.764606   43027 main.go:141] libmachine: Using API Version  1
	I0708 20:25:22.764634   43027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:25:22.765016   43027 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:25:22.765233   43027 main.go:141] libmachine: (multinode-957088) Calling .GetState
	I0708 20:25:22.766832   43027 status.go:330] multinode-957088 host status = "Running" (err=<nil>)
	I0708 20:25:22.766850   43027 host.go:66] Checking if "multinode-957088" exists ...
	I0708 20:25:22.767274   43027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:25:22.767330   43027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:25:22.782198   43027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34473
	I0708 20:25:22.782584   43027 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:25:22.783012   43027 main.go:141] libmachine: Using API Version  1
	I0708 20:25:22.783032   43027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:25:22.783381   43027 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:25:22.783589   43027 main.go:141] libmachine: (multinode-957088) Calling .GetIP
	I0708 20:25:22.786314   43027 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:25:22.786736   43027 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:25:22.786757   43027 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:25:22.786898   43027 host.go:66] Checking if "multinode-957088" exists ...
	I0708 20:25:22.787179   43027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:25:22.787223   43027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:25:22.802334   43027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I0708 20:25:22.802679   43027 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:25:22.803096   43027 main.go:141] libmachine: Using API Version  1
	I0708 20:25:22.803118   43027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:25:22.803390   43027 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:25:22.803576   43027 main.go:141] libmachine: (multinode-957088) Calling .DriverName
	I0708 20:25:22.803788   43027 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:25:22.803825   43027 main.go:141] libmachine: (multinode-957088) Calling .GetSSHHostname
	I0708 20:25:22.806230   43027 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:25:22.806574   43027 main.go:141] libmachine: (multinode-957088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:56:e9", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:23:04 +0000 UTC Type:0 Mac:52:54:00:f1:56:e9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-957088 Clientid:01:52:54:00:f1:56:e9}
	I0708 20:25:22.806604   43027 main.go:141] libmachine: (multinode-957088) DBG | domain multinode-957088 has defined IP address 192.168.39.44 and MAC address 52:54:00:f1:56:e9 in network mk-multinode-957088
	I0708 20:25:22.806724   43027 main.go:141] libmachine: (multinode-957088) Calling .GetSSHPort
	I0708 20:25:22.806867   43027 main.go:141] libmachine: (multinode-957088) Calling .GetSSHKeyPath
	I0708 20:25:22.806986   43027 main.go:141] libmachine: (multinode-957088) Calling .GetSSHUsername
	I0708 20:25:22.807106   43027 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/multinode-957088/id_rsa Username:docker}
	I0708 20:25:22.887957   43027 ssh_runner.go:195] Run: systemctl --version
	I0708 20:25:22.894489   43027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:25:22.911416   43027 kubeconfig.go:125] found "multinode-957088" server: "https://192.168.39.44:8443"
	I0708 20:25:22.911487   43027 api_server.go:166] Checking apiserver status ...
	I0708 20:25:22.911538   43027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 20:25:22.928858   43027 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1118/cgroup
	W0708 20:25:22.940148   43027 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1118/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 20:25:22.940203   43027 ssh_runner.go:195] Run: ls
	I0708 20:25:22.945502   43027 api_server.go:253] Checking apiserver healthz at https://192.168.39.44:8443/healthz ...
	I0708 20:25:22.949955   43027 api_server.go:279] https://192.168.39.44:8443/healthz returned 200:
	ok
	I0708 20:25:22.949976   43027 status.go:422] multinode-957088 apiserver status = Running (err=<nil>)
	I0708 20:25:22.949986   43027 status.go:257] multinode-957088 status: &{Name:multinode-957088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:25:22.950001   43027 status.go:255] checking status of multinode-957088-m02 ...
	I0708 20:25:22.950294   43027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:25:22.950325   43027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:25:22.966985   43027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0708 20:25:22.967507   43027 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:25:22.967995   43027 main.go:141] libmachine: Using API Version  1
	I0708 20:25:22.968017   43027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:25:22.968391   43027 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:25:22.968596   43027 main.go:141] libmachine: (multinode-957088-m02) Calling .GetState
	I0708 20:25:22.970259   43027 status.go:330] multinode-957088-m02 host status = "Running" (err=<nil>)
	I0708 20:25:22.970277   43027 host.go:66] Checking if "multinode-957088-m02" exists ...
	I0708 20:25:22.970668   43027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:25:22.970707   43027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:25:22.986231   43027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I0708 20:25:22.986662   43027 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:25:22.987122   43027 main.go:141] libmachine: Using API Version  1
	I0708 20:25:22.987144   43027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:25:22.987435   43027 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:25:22.987644   43027 main.go:141] libmachine: (multinode-957088-m02) Calling .GetIP
	I0708 20:25:22.990505   43027 main.go:141] libmachine: (multinode-957088-m02) DBG | domain multinode-957088-m02 has defined MAC address 52:54:00:31:80:14 in network mk-multinode-957088
	I0708 20:25:22.990896   43027 main.go:141] libmachine: (multinode-957088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:80:14", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:24:01 +0000 UTC Type:0 Mac:52:54:00:31:80:14 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-957088-m02 Clientid:01:52:54:00:31:80:14}
	I0708 20:25:22.990929   43027 main.go:141] libmachine: (multinode-957088-m02) DBG | domain multinode-957088-m02 has defined IP address 192.168.39.125 and MAC address 52:54:00:31:80:14 in network mk-multinode-957088
	I0708 20:25:22.991043   43027 host.go:66] Checking if "multinode-957088-m02" exists ...
	I0708 20:25:22.991334   43027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:25:22.991370   43027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:25:23.006394   43027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I0708 20:25:23.006830   43027 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:25:23.007292   43027 main.go:141] libmachine: Using API Version  1
	I0708 20:25:23.007313   43027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:25:23.007659   43027 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:25:23.007826   43027 main.go:141] libmachine: (multinode-957088-m02) Calling .DriverName
	I0708 20:25:23.008015   43027 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 20:25:23.008034   43027 main.go:141] libmachine: (multinode-957088-m02) Calling .GetSSHHostname
	I0708 20:25:23.010698   43027 main.go:141] libmachine: (multinode-957088-m02) DBG | domain multinode-957088-m02 has defined MAC address 52:54:00:31:80:14 in network mk-multinode-957088
	I0708 20:25:23.011085   43027 main.go:141] libmachine: (multinode-957088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:80:14", ip: ""} in network mk-multinode-957088: {Iface:virbr1 ExpiryTime:2024-07-08 21:24:01 +0000 UTC Type:0 Mac:52:54:00:31:80:14 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-957088-m02 Clientid:01:52:54:00:31:80:14}
	I0708 20:25:23.011123   43027 main.go:141] libmachine: (multinode-957088-m02) DBG | domain multinode-957088-m02 has defined IP address 192.168.39.125 and MAC address 52:54:00:31:80:14 in network mk-multinode-957088
	I0708 20:25:23.011268   43027 main.go:141] libmachine: (multinode-957088-m02) Calling .GetSSHPort
	I0708 20:25:23.011424   43027 main.go:141] libmachine: (multinode-957088-m02) Calling .GetSSHKeyPath
	I0708 20:25:23.011574   43027 main.go:141] libmachine: (multinode-957088-m02) Calling .GetSSHUsername
	I0708 20:25:23.011704   43027 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19195-5988/.minikube/machines/multinode-957088-m02/id_rsa Username:docker}
	I0708 20:25:23.095313   43027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 20:25:23.110353   43027 status.go:257] multinode-957088-m02 status: &{Name:multinode-957088-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0708 20:25:23.110393   43027 status.go:255] checking status of multinode-957088-m03 ...
	I0708 20:25:23.110758   43027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0708 20:25:23.110797   43027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0708 20:25:23.125888   43027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0708 20:25:23.126274   43027 main.go:141] libmachine: () Calling .GetVersion
	I0708 20:25:23.126709   43027 main.go:141] libmachine: Using API Version  1
	I0708 20:25:23.126732   43027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0708 20:25:23.127048   43027 main.go:141] libmachine: () Calling .GetMachineName
	I0708 20:25:23.127226   43027 main.go:141] libmachine: (multinode-957088-m03) Calling .GetState
	I0708 20:25:23.128604   43027 status.go:330] multinode-957088-m03 host status = "Stopped" (err=<nil>)
	I0708 20:25:23.128617   43027 status.go:343] host is not running, skipping remaining checks
	I0708 20:25:23.128622   43027 status.go:257] multinode-957088-m03 status: &{Name:multinode-957088-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)

TestMultiNode/serial/StartAfterStop (27.29s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-957088 node start m03 -v=7 --alsologtostderr: (26.671364591s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.29s)

TestMultiNode/serial/DeleteNode (2.35s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-957088 node delete m03: (1.829095252s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.35s)

TestMultiNode/serial/RestartMultiNode (181.08s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-957088 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0708 20:34:23.844401   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-957088 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m0.559887312s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-957088 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (181.08s)

TestMultiNode/serial/ValidateNameConflict (45.94s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-957088
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-957088-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-957088-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.131711ms)

-- stdout --
	* [multinode-957088-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19195
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-957088-m02' is duplicated with machine name 'multinode-957088-m02' in profile 'multinode-957088'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-957088-m03 --driver=kvm2  --container-runtime=crio
E0708 20:36:29.734012   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-957088-m03 --driver=kvm2  --container-runtime=crio: (44.652365053s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-957088
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-957088: exit status 80 (204.195417ms)

-- stdout --
	* Adding node m03 to cluster multinode-957088 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-957088-m03 already exists in multinode-957088-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-957088-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.94s)

TestScheduledStopUnix (114.38s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-485334 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-485334 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.794759145s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-485334 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-485334 -n scheduled-stop-485334
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-485334 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-485334 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-485334 -n scheduled-stop-485334
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-485334
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-485334 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0708 20:41:29.732879   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-485334
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-485334: exit status 7 (64.166011ms)

-- stdout --
	scheduled-stop-485334
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-485334 -n scheduled-stop-485334
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-485334 -n scheduled-stop-485334: exit status 7 (61.949282ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-485334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-485334
--- PASS: TestScheduledStopUnix (114.38s)

TestRunningBinaryUpgrade (228.44s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4218609156 start -p running-upgrade-634376 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4218609156 start -p running-upgrade-634376 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m21.790931828s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-634376 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0708 20:44:23.843412   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-634376 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.840185816s)
helpers_test.go:175: Cleaning up "running-upgrade-634376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-634376
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-634376: (1.170346812s)
--- PASS: TestRunningBinaryUpgrade (228.44s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-596857 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-596857 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (75.436934ms)

-- stdout --
	* [NoKubernetes-596857] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19195
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (96.5s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-596857 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-596857 --driver=kvm2  --container-runtime=crio: (1m36.257789851s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-596857 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.50s)

TestNetworkPlugins/group/false (3.25s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-088829 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-088829 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (110.989271ms)

-- stdout --
	* [false-088829] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19195
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0708 20:43:14.687216   51221 out.go:291] Setting OutFile to fd 1 ...
	I0708 20:43:14.687371   51221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:43:14.687382   51221 out.go:304] Setting ErrFile to fd 2...
	I0708 20:43:14.687387   51221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 20:43:14.687661   51221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19195-5988/.minikube/bin
	I0708 20:43:14.688351   51221 out.go:298] Setting JSON to false
	I0708 20:43:14.689397   51221 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5144,"bootTime":1720466251,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0708 20:43:14.689479   51221 start.go:139] virtualization: kvm guest
	I0708 20:43:14.691786   51221 out.go:177] * [false-088829] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0708 20:43:14.693381   51221 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 20:43:14.693374   51221 notify.go:220] Checking for updates...
	I0708 20:43:14.696148   51221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 20:43:14.697603   51221 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19195-5988/kubeconfig
	I0708 20:43:14.698848   51221 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19195-5988/.minikube
	I0708 20:43:14.700338   51221 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0708 20:43:14.701646   51221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 20:43:14.703617   51221 config.go:182] Loaded profile config "NoKubernetes-596857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0708 20:43:14.703735   51221 config.go:182] Loaded profile config "old-k8s-version-914355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0708 20:43:14.703810   51221 config.go:182] Loaded profile config "running-upgrade-634376": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0708 20:43:14.703904   51221 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 20:43:14.746112   51221 out.go:177] * Using the kvm2 driver based on user configuration
	I0708 20:43:14.747583   51221 start.go:297] selected driver: kvm2
	I0708 20:43:14.747601   51221 start.go:901] validating driver "kvm2" against <nil>
	I0708 20:43:14.747612   51221 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 20:43:14.749595   51221 out.go:177] 
	W0708 20:43:14.750707   51221 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0708 20:43:14.751987   51221 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-088829 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-088829

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-088829

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-088829

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-088829

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-088829

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-088829

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-088829

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-088829

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-088829

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-088829

>>> host: /etc/nsswitch.conf:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: /etc/hosts:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: /etc/resolv.conf:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-088829

>>> host: crictl pods:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: crictl containers:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> k8s: describe netcat deployment:
error: context "false-088829" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-088829" does not exist

>>> k8s: netcat logs:
error: context "false-088829" does not exist

>>> k8s: describe coredns deployment:
error: context "false-088829" does not exist

>>> k8s: describe coredns pods:
error: context "false-088829" does not exist

>>> k8s: coredns logs:
error: context "false-088829" does not exist

>>> k8s: describe api server pod(s):
error: context "false-088829" does not exist

>>> k8s: api server logs:
error: context "false-088829" does not exist

>>> host: /etc/cni:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: ip a s:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: ip r s:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: iptables-save:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: iptables table nat:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> k8s: describe kube-proxy daemon set:
error: context "false-088829" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-088829" does not exist

>>> k8s: kube-proxy logs:
error: context "false-088829" does not exist

>>> host: kubelet daemon status:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: kubelet daemon config:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> k8s: kubelet logs:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-088829

>>> host: docker daemon status:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: docker daemon config:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: /etc/docker/daemon.json:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: docker system info:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: cri-docker daemon status:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: cri-docker daemon config:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: cri-dockerd version:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: containerd daemon status:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: containerd daemon config:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: /etc/containerd/config.toml:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: containerd config dump:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: crio daemon status:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

>>> host: crio daemon config:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-088829"

                                                
                                                
----------------------- debugLogs end: false-088829 [took: 2.962900189s] --------------------------------
helpers_test.go:175: Cleaning up "false-088829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-088829
--- PASS: TestNetworkPlugins/group/false (3.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (39.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-596857 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-596857 --no-kubernetes --driver=kvm2  --container-runtime=crio: (37.991418734s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-596857 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-596857 status -o json: exit status 2 (246.61582ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-596857","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-596857
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (27.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-596857 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-596857 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.036874191s)
--- PASS: TestNoKubernetes/serial/Start (27.04s)

                                                
                                    
x
+
TestPause/serial/Start (104.61s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-897827 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-897827 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m44.610964393s)
--- PASS: TestPause/serial/Start (104.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-596857 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-596857 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.651378ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
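
For reference, the check above can be reproduced by hand with the same command the test uses; a minimal sketch, assuming the NoKubernetes-596857 profile from this run is still up:

# systemctl is-active exits non-zero (here status 3, "inactive") when the unit is
# not running, and the minikube ssh wrapper propagates that as a non-zero exit
# code, which is what the test asserts for a --no-kubernetes profile.
out/minikube-linux-amd64 ssh -p NoKubernetes-596857 \
  "sudo systemctl is-active --quiet service kubelet" \
  || echo "kubelet is not running (expected with --no-kubernetes)"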

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-596857
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-596857: (1.288399969s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (44.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-596857 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-596857 --driver=kvm2  --container-runtime=crio: (44.067283489s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-596857 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-596857 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.041146ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (43.8s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-897827 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-897827 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.756495542s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.80s)

                                                
                                    
x
+
TestPause/serial/Pause (0.94s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-897827 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.94s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-897827 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-897827 --output=json --layout=cluster: exit status 2 (304.557881ms)

                                                
                                                
-- stdout --
	{"Name":"pause-897827","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-897827","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.92s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-897827 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.92s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.09s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-897827 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-897827 --alsologtostderr -v=5: (1.094244024s)
--- PASS: TestPause/serial/PauseAgain (1.09s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.5s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-897827 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-897827 --alsologtostderr -v=5: (1.497267542s)
--- PASS: TestPause/serial/DeletePaused (1.50s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.53s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (75.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-028021 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-028021 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (1m15.110476939s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (111.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-239931 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0708 20:47:52.785375   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-239931 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (1m51.948934222s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (111.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-028021 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd9a12f5-1cee-4bb0-aa1b-2ee78ab9062b] Pending
helpers_test.go:344: "busybox" [fd9a12f5-1cee-4bb0-aa1b-2ee78ab9062b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fd9a12f5-1cee-4bb0-aa1b-2ee78ab9062b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003748089s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-028021 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)
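
For reference, the deploy-and-probe sequence above can be replayed manually; a minimal sketch against the no-preload-028021 context and the busybox manifest from the test's testdata directory (the test itself polls readiness through its own helpers for up to 8m0s; kubectl wait and the 120s timeout below are substitutions for illustration):

# Deploy the busybox pod used by the test.
kubectl --context no-preload-028021 create -f testdata/busybox.yaml

# Wait until the pod matching the test's label selector is Ready.
kubectl --context no-preload-028021 wait --for=condition=ready \
  pod -l integration-test=busybox --timeout=120s

# The health probe itself is just an exec into the pod.
kubectl --context no-preload-028021 exec busybox -- /bin/sh -c "ulimit -n"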

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-028021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-028021 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)
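
For reference, the addon-override flow above is just two commands; a minimal sketch against the no-preload-028021 profile, keeping fake.domain as the test passes it (apparently so the overridden image can never actually be pulled):

# Enable metrics-server while the cluster is running, overriding the addon's
# image and registry.
out/minikube-linux-amd64 addons enable metrics-server -p no-preload-028021 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain

# Confirm the overrides are reflected in the deployment spec.
kubectl --context no-preload-028021 describe deploy/metrics-server -n kube-system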

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-914355 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-914355 --alsologtostderr -v=3: (4.351547669s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914355 -n old-k8s-version-914355: exit status 7 (64.443783ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-914355 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-239931 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [34e0f1fb-6d58-4bd9-8328-c9c5fc2936af] Pending
helpers_test.go:344: "busybox" [34e0f1fb-6d58-4bd9-8328-c9c5fc2936af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0708 20:49:06.895024   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
helpers_test.go:344: "busybox" [34e0f1fb-6d58-4bd9-8328-c9c5fc2936af] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004525926s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-239931 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-239931 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-239931 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.055417669s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-239931 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-071971 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-071971 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (1m0.40029882s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (632.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-028021 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-028021 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (10m32.097559404s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-028021 -n no-preload-028021
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (632.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-071971 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5cbbc6b2-edca-4a31-95ba-4459b6944106] Pending
helpers_test.go:344: "busybox" [5cbbc6b2-edca-4a31-95ba-4459b6944106] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5cbbc6b2-edca-4a31-95ba-4459b6944106] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004521693s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-071971 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-071971 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-071971 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (571.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-239931 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-239931 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (9m31.425241196s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-239931 -n embed-certs-239931
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (571.75s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (481.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-071971 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0708 20:54:23.844279   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
E0708 20:56:29.733524   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-071971 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (8m1.418100365s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (481.69s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.57s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (101.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.845593864 start -p stopped-upgrade-957981 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.845593864 start -p stopped-upgrade-957981 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (51.061528939s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.845593864 -p stopped-upgrade-957981 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.845593864 -p stopped-upgrade-957981 stop: (2.166267272s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-957981 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0708 21:18:14.566700   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
E0708 21:18:14.572052   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
E0708 21:18:14.582400   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
E0708 21:18:14.603036   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
E0708 21:18:14.643520   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
E0708 21:18:14.723973   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
E0708 21:18:14.884429   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
E0708 21:18:15.205100   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-957981 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.270494252s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (101.50s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (69.69s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-292907 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-292907 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (1m9.694528539s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (69.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (102.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0708 21:18:19.687161   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
E0708 21:18:24.807684   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m42.549601147s)
--- PASS: TestNetworkPlugins/group/auto/Start (102.55s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-292907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-292907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.698898158s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.70s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-292907 --alsologtostderr -v=3
E0708 21:18:35.048525   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-292907 --alsologtostderr -v=3: (7.360163533s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-292907 -n newest-cni-292907
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-292907 -n newest-cni-292907: exit status 7 (62.38976ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-292907 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (50.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-292907 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-292907 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (49.768523727s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-292907 -n newest-cni-292907
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (50.02s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-957981
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (84.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0708 21:18:55.529552   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
E0708 21:19:23.843585   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/functional-787563/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m24.231035392s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-292907 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-292907 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-292907 -n newest-cni-292907
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-292907 -n newest-cni-292907: exit status 2 (242.099493ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-292907 -n newest-cni-292907
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-292907 -n newest-cni-292907: exit status 2 (247.352279ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-292907 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-292907 -n newest-cni-292907
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-292907 -n newest-cni-292907
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.58s)
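
For reference, the pause round trip above uses only commands shown in this entry; a minimal sketch against the newest-cni-292907 profile:

# Pause, then read per-component status. While paused, `status` exits 2
# (treated as "may be ok" by the test) and reports the apiserver as Paused
# and the kubelet as Stopped.
out/minikube-linux-amd64 pause -p newest-cni-292907 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-292907 -n newest-cni-292907 || true
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p newest-cni-292907 -n newest-cni-292907 || true

# Unpause and check the same two fields again.
out/minikube-linux-amd64 unpause -p newest-cni-292907 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-292907 -n newest-cni-292907
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p newest-cni-292907 -n newest-cni-292907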

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (86.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0708 21:19:36.490577   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m26.775079116s)
--- PASS: TestNetworkPlugins/group/calico/Start (86.78s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-071971 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-071971 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971: exit status 2 (276.240614ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971: exit status 2 (256.207255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-071971 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-071971 -n default-k8s-diff-port-071971
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (88.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m28.701110386s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (88.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-088829 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-088829 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ld7jc" [b2fa34e4-3cfa-4959-b484-fc35f63f99f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ld7jc" [b2fa34e4-3cfa-4959-b484-fc35f63f99f7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.006577439s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7ttvz" [6240ca02-8e31-41dc-a3c2-eb39e62898e2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004306334s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-088829 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-088829 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-088829 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-m2dzb" [4c972aa3-b760-42f8-8daa-9fcb1a16c60c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-m2dzb" [4c972aa3-b760-42f8-8daa-9fcb1a16c60c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005333057s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-088829 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)
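
For reference, the three probes above (DNS, Localhost, HairPin) all exec into the netcat deployment created earlier by NetCatPod; a minimal sketch against the kindnet-088829 context used in this run:

# DNS: the pod can resolve the in-cluster kubernetes service.
kubectl --context kindnet-088829 exec deployment/netcat -- nslookup kubernetes.default

# Localhost: the pod can reach port 8080 on localhost.
kubectl --context kindnet-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

# HairPin: the pod can reach itself back through the netcat service name.
kubectl --context kindnet-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"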

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (102.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m42.121454058s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (102.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (97.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0708 21:20:56.753790   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
E0708 21:20:56.759098   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
E0708 21:20:56.769421   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
E0708 21:20:56.789826   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
E0708 21:20:56.830250   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
E0708 21:20:56.911139   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
E0708 21:20:57.071552   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
E0708 21:20:57.392143   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m37.078348346s)
--- PASS: TestNetworkPlugins/group/flannel/Start (97.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-78bsp" [b1671e67-45fd-4808-82db-744eed92c721] Running
E0708 21:20:58.032833   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
E0708 21:20:58.411586   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/no-preload-028021/client.crt: no such file or directory
E0708 21:20:59.313350   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
E0708 21:21:01.873939   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006470765s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-088829 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-088829 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2cm89" [77370a4c-f7fb-4be9-acba-6a6cc6c1f19b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0708 21:21:06.994343   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-2cm89" [77370a4c-f7fb-4be9-acba-6a6cc6c1f19b] Running
E0708 21:21:12.786591   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/addons-268316/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005197936s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-088829 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-088829 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-088829 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hftzc" [56caa77c-d0c0-4810-bfa5-66bc767931e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hftzc" [56caa77c-d0c0-4810-bfa5-66bc767931e9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004393206s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-088829 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (66.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0708 21:21:37.716087   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-088829 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m6.280699905s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-088829 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-088829 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-d862k" [2cfe008d-8407-450a-a4b0-47b8dfab6c27] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0708 21:22:18.676690   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/default-k8s-diff-port-071971/client.crt: no such file or directory
E0708 21:22:19.106008   13141 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19195-5988/.minikube/profiles/old-k8s-version-914355/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-d862k" [2cfe008d-8407-450a-a4b0-47b8dfab6c27] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003799804s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zcg2x" [837873af-e94c-40a2-b7d0-a9d869db6463] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004740423s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
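Note: the ControllerPod checks above poll for a Running pod via a label selector (k8s-app=calico-node in kube-system for Calico, app=flannel in kube-flannel for Flannel). A roughly equivalent manual check, assuming kubectl is pointed at the same profile, would be:

  kubectl --context flannel-088829 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m

This waits for readiness rather than reproducing the test helper exactly, but it exercises the same namespace and selector shown in the log above.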

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-088829 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-088829 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-088829 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-x4xv2" [676ef905-6c4e-4239-b012-515bc0b6c7d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-x4xv2" [676ef905-6c4e-4239-b012-515bc0b6c7d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003798479s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-088829 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-088829 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7gflz" [eb6866c1-345c-44eb-903e-0a584badb3a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7gflz" [eb6866c1-345c-44eb-903e-0a584badb3a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004535828s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-088829 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-088829 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
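For reference, the DNS, Localhost and HairPin checks above all run against the same netcat deployment created from testdata/netcat-deployment.yaml; a minimal sketch for repeating them by hand against one of these profiles (bridge-088829 is used here purely as an example) is:

  kubectl --context bridge-088829 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context bridge-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context bridge-088829 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

These are the same commands the tests issue; checking a different plugin only means changing the --context value.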

                                                
                                    

Test skip (37/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.2/cached-images 0
15 TestDownloadOnly/v1.30.2/binaries 0
16 TestDownloadOnly/v1.30.2/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
41 TestAddons/parallel/Volcano 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
257 TestStartStop/group/disable-driver-mounts 0.14
264 TestNetworkPlugins/group/kubenet 2.7
272 TestNetworkPlugins/group/cilium 4.19
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-733920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-733920
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
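The cleanup step above is the generic per-profile teardown used throughout this report; assuming the same locally built binary, leftover profiles from an interrupted run can be listed and removed the same way:

  out/minikube-linux-amd64 profile list
  out/minikube-linux-amd64 delete -p <profile-name>

The <profile-name> placeholder stands for any profile reported by the first command.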

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-088829 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-088829

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-088829

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-088829

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-088829

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-088829

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-088829

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-088829

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-088829

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-088829

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-088829

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-088829

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-088829" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-088829" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-088829

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-088829"

                                                
                                                
----------------------- debugLogs end: kubenet-088829 [took: 2.548625069s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-088829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-088829
--- SKIP: TestNetworkPlugins/group/kubenet (2.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-088829 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-088829" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
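
Note: the kubectl config dump above is empty (clusters, contexts and users are all null), which is why every kubectl-based probe in this debug log fails with "context was not found": the cilium-088829 profile was never started, so no context was ever written to the kubeconfig. A minimal sketch, assuming client-go is available (this is not part of the actual minikube test harness), of how a caller could detect the missing context before issuing kubectl commands:

// check_context.go - illustrative sketch only; assumes k8s.io/client-go is on the module path.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the same way kubectl does ($KUBECONFIG or ~/.kube/config).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "loading kubeconfig:", err)
		os.Exit(1)
	}

	name := "cilium-088829" // context name taken from the log above
	if _, ok := cfg.Contexts[name]; !ok {
		// This is the situation captured above: contexts are null, so every
		// "kubectl --context cilium-088829 ..." call fails.
		fmt.Printf("context %q not found; skipping kubectl-based probes\n", name)
		return
	}
	fmt.Printf("context %q exists (current-context: %q)\n", name, cfg.CurrentContext)
}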

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-088829

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-088829" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-088829"

                                                
                                                
----------------------- debugLogs end: cilium-088829 [took: 3.939247312s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-088829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-088829
--- SKIP: TestNetworkPlugins/group/cilium (4.19s)
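
Note: the skip at net_test.go:102 happens before any cluster is created, which is why every host and k8s probe in the debug log above reports a missing profile or context. A minimal sketch of that kind of early skip guard in a Go test (illustrative only; the real net_test.go may differ):

package net_test // hypothetical package name for illustration

import "testing"

func TestNetworkPluginsCiliumSketch(t *testing.T) { // hypothetical test name
	// Bail out immediately, mirroring the message captured in the log above.
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
	// Cluster start and CNI connectivity checks would follow here.
}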

                                                
                                    